Leaderboard

Popular Content

Showing content with the highest reputation on 02/23/20 in all areas

  1. Summary: Support Thread for ich777 Gameserver Dockers (CounterStrike: Source & CounterStrike: GO, TeamFortress 2, ArmA III,... - complete list in the second post) Application: SteamCMD DockerHub: https://hub.docker.com/r/ich777/steamcmd All dockers are easy to set up and highly customizable. All dockers are tested with the standard configuration (port forwarding,...) to confirm they are reachable and show up in the server list from the "outside". The default password for the gameservers, if enabled, is: Docker If there is an admin password, the default is: adminDocker Please read the description of each docker and the variables that you set on install (some dockers need special variables to run). The Steam username and password are only needed in templates where the two fields are marked as required with a red *. Created a Steam Group: https://steamcommunity.com/groups/dockersforunraid If you like my work, please consider making a donation
    1 point
  2. I've spent a few weeks getting my X570 AORUS Elite WiFi + 3900X + GTX 1070 running to my liking, so I thought I would share. These settings are also confirmed working on the AORUS Pro WiFi and AORUS Ultra, and will probably be similar for all the X570 AORUS boards. Here are the settings for USB passthrough, single Nvidia GPU passthrough, and more. Using BIOS F10 - this is important, as your IOMMU groupings can/will change with AGESA updates.

UEFI / BIOS Settings: Tweaker -> Advanced CPU Settings -> SVM Mode -> Enable; Settings -> Miscellaneous -> IOMMU -> Enable; Settings -> AMD CBS -> ACS Enable -> Enable; Settings -> AMD CBS -> Enable AER Cap -> Enable.

USB Passthrough: Leaving PCIe ACS override disabled, you should have ~34 IOMMU groups (give or take, depending on how many PCIe devices you have connected) if you look in Tools > System Devices. There should be 3 USB controllers with the same vendor/device ID (1022:149c). Two of them will be lumped together with a PCI bridge and "Non-Essential Instrumentation". Those are the two we want to pass! The more logical option would be the controller isolated in its own group, but I could NOT get that one to pass. The trick is to run your Unraid USB off that third controller, so we can pass the other two controllers together. Run your Unraid USB out of the rear white USB port labeled BIOS. That white USB 3.0 port plus the neighboring 3 blue USB 3.0 ports share a controller; use these other ports for your keyboard and mouse (to be passed through as devices) and your UPS or whatever else you want Unraid to access. Note the addresses of the two USB controllers AND the "Non-Essential Instrumentation" in that IOMMU group - in my case they are 07:00.0, 07:00.1, 07:00.3. Create the file /boot/config/vfio-pci.cfg listing those three addresses (a sketch of the file follows at the end of this item). When you reboot, these devices will be available in the VM XML GUI to pass through under Other PCI Devices. Pass all 3 of them together! If you do not pass the "Non-Essential Instrumentation", Unraid will throw a warning in the logs that the .1 controller is dependent on it and unavailable to reset. When you pass all three through together you will get no errors/warnings and everything works. Bonus: Bluetooth on this board is a USB device tied to the .3 controller and is passed through along with the controller! Note: when you add or remove PCIe devices, these addresses can/will change, so check Tools > System Devices to see if the USB addresses have changed and update vfio-pci.cfg accordingly.

Single (NVIDIA) GPU Passthrough: For single GPU passthrough, you need to disable graphical output in Unraid. From the Main menu, click the name of your boot device (flash). Under Syslinux Config -> Unraid OS, add "video=efifb:off" after "append initrd=/bzroot". The line should now read "append initrd=/bzroot video=efifb:off". When you reboot you will notice there is no video output when Unraid boots (you will be left with a freeze frame of the boot menu). Your solo GPU is now ready to pass. For Nvidia you will need the vbios for your card. I dumped my own following this tutorial using a second GPU. If you can't dump your own, try following this tutorial to download/modify a working vbios. Now simply pass your GPU, vbios, and the sound card that goes with your GPU from the VM XML GUI.

Fan Speed Sensors and PWM Controllers: See warning below! You can already see your CPU temp (Tctl) using the k10temp driver with Dynamix System Temperature. If you want to see fan speeds on your dashboard, or use the Dynamix Auto Fan Control plugin, we can force the it87 driver to load for the it8628 on this board. To force this we need to set another boot flag, "acpi_enforce_resources=lax". Add this the same way as above, after "video=efifb:off"; that line in your syslinux.cfg should now read "append initrd=/bzroot video=efifb:off acpi_enforce_resources=lax". Next, add a modprobe line to /boot/config/go (again, see the sketch at the end of this item). The it87 driver will now load on boot, your fan speeds will be displayed on the Unraid dashboard, and the fan controllers will be available in Dynamix Auto Fan Control. Warning: Setting acpi_enforce_resources to lax is considered risky for reasons explained here.
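The contents of vfio-pci.cfg didn't survive the quote above. Based on the addresses the poster lists (07:00.0, 07:00.1, 07:00.3) and the BIND= format Unraid's VFIO-PCI config file uses, it likely looked something like this - a sketch, not the original attachment:

    # /boot/config/vfio-pci.cfg - one BIND= line of space-separated PCI addresses
    # (these are the poster's addresses; take yours from Tools > System Devices)
    BIND=07:00.0 07:00.1 07:00.3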
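Likewise, the go-file line was lost. The usual way to force the it87 driver onto an IT8628 chip is a modprobe with a forced chip ID - assumed here from the chip the poster names, so verify before relying on it:

    # append to /boot/config/go - force the it87 driver to bind to the IT8628
    modprobe it87 force_id=0x8628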
    1 point
  3. Please add drivers for ASpeed BMCs used on various server motherboards. https://www.aspeedtech.com/support.php They are free to be distributed according to the readme.txt
    1 point
  4. Most likely causes:
- Network cable has a broken wire in it ... solution: try a different cable.
- Cable connection is poor ... solution: check that both ends of the network cable are fully plugged into the sockets. If either end has a broken clip on the connector, replace the cable.
- Port in hub is going wonky ... solution: try another port.
- Hub is going wonky ... solution: replace the hub.
- Network port in the Unraid server is going wonky ... solution: use a cheap plug-in network card; Intel-based ones are usually pretty compatible with Unraid.
    1 point
  5. There was an issue raised about this on the bitwarden_rs repository. It's due to the fact that your `domain` variable doesn't include the scheme.
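A minimal sketch of the fix, assuming the container's DOMAIN environment variable and a hypothetical hostname - the point is that the value must include the scheme:

    # fails: no scheme
    DOMAIN=vault.example.com
    # works: scheme included
    DOMAIN=https://vault.example.com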
    1 point
  6. Look into hairpin NAT. This is a network issue btw, not a container issue. And to my knowledge lsio doesn't have a v2 container, so if you're using organizrtools/organizr-v2 you can get application support here: https://discordapp.com/invite/TrNtY7N
    1 point
  7. Answered many times in this thread. This plugin IS a version of Unraid, but with the Nvidia drivers added. Just upgrade from the plugin.
    1 point
  8. No. Just go to the plugin and install Nvidia 6.8.2 - a full version of Unraid with extra drivers.
    1 point
  9. Each additional disk is an additional point of failure. To reliably rebuild every bit of a disk, Unraid must reliably read every bit of parity PLUS every bit of ALL remaining disks. Each disk must be reliable. Any old disks which don't add any meaningful capacity aren't worth wasting a port on, and are actually an unneeded risk.
    1 point
  10. It'll keep monitoring the *arr queue, and when it has been imported, it'll clean up the extracted files. The ..._unpackerred folder is normal while extracting; after extraction it moves the extracted files into the original folder, so that *arr doesn't try to copy a partially extracted file.
    1 point
  11. I thought itimpi's explanation was very clear that the answer is: your array can have 30 of these "devices", but outside of the array you can have however many you want. However, you are completely wrong in assuming there is "a single point of failure - being the raid card itself". Every drive behind the RAID card is a point of failure too. Moreover, you also severely compromise your ability to recover your data by mixing RAID into Unraid. The whole point of using the Unraid array is that each drive has its own file system, so you will only lose all your data if all your data drives fail. Using your (terrible) scheme of 3 RAID10 groups, you can lose ALL your data with just SIX failed drives (two drives from the same mirrored pair in each of the three groups). Statistically, each parity drive can only reasonably protect a max of (a fraction under) 8 drives (based on my calculation off Backblaze HDD failure stats for 8TB+ drives). So a 30-drive array is, in my opinion, pushing your luck to the limit.
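A back-of-envelope illustration of why more drives means more rebuild risk - my own sketch, not the poster's actual Backblaze-based calculation; it simply assumes an independent 1% per-drive failure chance over a rebuild window:

    # chance that every one of N surviving drives reads cleanly during a rebuild
    awk 'BEGIN { p = 0.01; for (n = 8; n <= 28; n += 4)
        printf "%2d drives: %.1f%% chance of a clean rebuild\n", n, 100 * (1 - p) ^ n }'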
    1 point
  12. It's fine to mount the old cache with UD, but you can't use dd to clone it, since it won't expand the partition and Unraid would complain of an invalid partition layout. You can format the new one with UD and copy everything from the old one with mc, for example. Don't copy the docker image - best to recreate it - and if copying vdisks, best to use the command line to keep them sparse. To format the new drive you need to first remove existing partitions: click on + to expand, then delete on the red X.
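A sketch of the sparse vdisk copy from the console, with hypothetical mount points - the --sparse flag is what keeps a mostly-empty vdisk from ballooning to its full allocated size:

    # copy a vdisk from the old cache (mounted via UD) to the new one,
    # keeping holes so the copy stays sparse (paths are hypothetical)
    cp --sparse=always /mnt/disks/old_cache/domains/win10/vdisk1.img \
       /mnt/user/domains/win10/vdisk1.img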
    1 point
  13. The array limits for each of the licence levels are detailed in the notes underneath the headline price part. Not quite sure what you mean about the hardware RAID groups? As far as Unraid is concerned, hardware RAID is invisible to it, and all drives in a hardware RAID group are presented as a single drive to Unraid. This means all recovery of individual drives within such a group has to be handled by the hardware RAID. It also means you have to have a parity drive/group that is at least as large as the largest hardware RAID group. If you mean that you are going to break the groups down to individual drives, then the limits for drives apply.
    1 point
  14. 1). You can install the cache drive immediately. It is up to you whether individual shares are set to use the cache - probably best not to for the initial load. 2). Unraid never moves files between array drives automatically. Any such process is always manual.
    1 point
  15. It depends on whether you are talking about ‘attached’ devices or ‘array’ devices? The Pro licence has a limit of 30 for ‘array’ devices but no limit on ‘attached’ devices. This is all specified on the Unraid Pricing page.
    1 point
  16. Not a good idea: when a disk fails, Unraid needs to successfully read all the other drives to rebuild it (or all but one with dual parity). It also depends on what you mean by questionable: any drive that fails the extended SMART test shouldn't be used; drives with a few reallocated sectors or other less serious SMART issues might be used, depending on your tolerance for failure.
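For reference, the extended test mentioned here can be started from the console (device name hypothetical; the same self-test is also offered on each disk's settings page in the GUI):

    # start the extended (long) SMART self-test on a drive
    smartctl -t long /dev/sdX
    # after it finishes (can take many hours), review the result and attributes
    smartctl -a /dev/sdX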
    1 point
  17. Okay, so not just me then, thanks for at least confirming I'm not completely insane... Yes, Grafana with the dashboards found in the following article as a base; I've tweaked them quite a bit from the originals to work with my Eaton UPS with 3 outlet groups. https://technicalramblings.com/blog/setting-grafana-influxdb-telegraf-ups-monitoring-unraid/
    1 point
  18. Too many other packages are required for this package, so I removed it. I'm not sure if it will work, but it did work on my Unraid dev VM - I have a bunch of other packages installed, though. You need python, pip and setuptools. Then just run pip install awscli.
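A minimal sketch of that sequence, assuming python, pip and setuptools are already present:

    # install the AWS CLI from PyPI and confirm it landed on the PATH
    pip install awscli
    aws --version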
    1 point
  19. You can't use the stubbing method you used, as all your USB controllers have the same vendor/device ID. So you need to look at the new method of stubbing by PCI address instead. I don't remember which release it was added in, but go through the release announcements to find it. Might have been 6.8, but might have been in one of the 6.7 releases.
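That address-based method is the /boot/config/vfio-pci.cfg file covered in item 2 above; a sketch with hypothetical addresses:

    # stub controllers by PCI address rather than vendor:device ID
    # (addresses hypothetical - take yours from Tools > System Devices)
    echo "BIND=03:00.0 03:00.1" > /boot/config/vfio-pci.cfg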
    1 point
  20. In case anyone ever finds this topic again: my current Unraid installation is based off Slackware 14.2, so I was able to find a Slackware package site and grab the sysstat package for 14.2 - wget <url>, then installpkg <file>.

# cat /etc/*release*
NAME=Slackware
VERSION="14.2"
ID=slackware
VERSION_ID=14.2
PRETTY_NAME="Slackware 14.2 x86_64 (post 14.2 -current)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:slackware:slackware_linux:14.2"
HOME_URL="http://slackware.com/"
SUPPORT_URL="http://www.linuxquestions.org/questions/slackware-14/"
BUG_REPORT_URL="http://www.linuxquestions.org/questions/slackware-14/"
VERSION_CODENAME=current
    1 point
  21. Edit pihole's settings, switch to advanced view, and change the repository to pihole/pihole:beta-v5.0. Make a backup of the appdata folder of pihole in case you want to revert.
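A sketch of the backup step from the console, assuming the stock /mnt/user/appdata location:

    # snapshot pihole's appdata before switching to the beta tag
    cp -a /mnt/user/appdata/pihole /mnt/user/appdata/pihole-backup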
    1 point
  22. If the webUI is working again, no need to know, since the recommended way to change this setting is not by editing the file, but by going to Settings - VM Manager.
    1 point
  23. Thanks @Kevinf63, I ended up just reverting to an earlier backup and it seems to be working now. Thanks for replying though.
    1 point
  24. Thanks for this! Been scouring the net for a good config file and yours is the first that's worked for me!
    1 point
  25. How is LDAP set up? What other dockers are required?
    1 point
  26. I have followed the 3rd method in this video to pass through one of my NICs. I have a PCIe network card with 2 NICs (i350). This is the IOMMU group:

IOMMU group 6:
[1022:43bb] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset USB 3.1 xHCI Controller (rev 02)
[1022:43b7] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset SATA Controller (rev 02)
[1022:43b2] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b2 (rev 02)
[1022:43b4] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[1022:43b4] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
[10ec:8168] 03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
[8086:1521] 05:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)
[8086:1521] 05:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01)

I want to pass through only the last NIC, or in the worst case both of them (the last 2), which belong to the same PCIe card. I have edited the syslinux config accordingly:

default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append iommu=pt vfio-pci.ids=8086:1521 initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label unRAID OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest

This is the pfSense VM:

<domain type='kvm'>
  <name>pfSense</name>
  <uuid>2769b456-605b-c3ca-fbf0-29c5f3799322</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="FreeBSD" icon="freebsd.png" os="freebsd"/>
  </metadata>
  <memory unit='KiB'>2621440</memory>
  <currentMemory unit='KiB'>2621440</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.6'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/VMDisks/pfsense/pfSense.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0x15'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0x16'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:5a:66:6f'/>
      <source bridge='br1'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:f6:52:6a'/>
      <source bridge='br2'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='es'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
</domain>

And I get this error when I try to start the VM:

Execution error: internal error: qemu unexpectedly closed the monitor: 2018-03-14T21:32:13.226454Z qemu-system-x86_64: -device vfio-pci,host=05:00.1,id=hostdev0,bus=pci.5,addr=0x0: vfio error: 0000:05:00.1: group 6 is not viable. Please ensure all devices within the iommu_group are bound to their vfio bus driver.

What am I doing wrong?
    1 point