Symon

Everything posted by Symon

  1. For me it works pretty well. I have 5 Windows Server 2016 VMs and 2 Windows 10 VMs with GPU passthrough running, plus several Dockers. The only problem I'm experiencing so far is that the Plex docker sometimes seems to cause issues with the PCI network card that comes with the MB, and I have to reboot the host to get it running again.
     My hardware:
     MB: ASUS ROG Zenith Extreme
     CPU: AMD Threadripper 1950X
     RAM: 64 GB HyperX Predator (stable at 3066 MHz)
     HDD: 2 x 6 TB & 1 x 8 TB WD Red for the array
     SSD: 2 x 512 GB M.2 Samsung 960 Pro for cache (cache holds the VMs/Dockers)
     GPU Slot 1: ASUS Radeon R5 230 for the unRAID host
     GPU Slot 2: GTX 950 (passthrough)
     GPU Slot 3: GTX 1080 (passthrough)
  2. What do you mean by that? I'm using a 1950X Threadripper and the only thing I did was allocate certain cores to VMs. Do I have to change something else?
  3. Running the following setup with 6.4.1:
     MB: ASUS ROG Zenith Extreme
     CPU: AMD Threadripper 1950X
     RAM: 64 GB HyperX Predator
     HDD: 2 x 6 TB & 1 x 8 TB WD Red for the array
     SSD: 2 x 512 GB M.2 Samsung 960 Pro for cache
     GPU Slot 1: ASUS Radeon R5 230 for the unRAID host
     GPU Slot 2: GTX 950 (passthrough)
     GPU Slot 3: GTX 1080 (passthrough)
     Running: 2 x Windows 10 with GPU passthrough, 5 x Windows Server 2016. Dockers: Plex, Let's Encrypt, Deluge, ...
     This setup runs more or less stable with 6.4.1. I get an instant crash when I try to test CPU performance with PassMark PerformanceTest, and I had one crash while the VM was idle (I saw strange graphics errors afterwards, so I think it may have had something to do with the GPU; it was maybe related to the screen power save, which I have since disabled). I've uninstalled PassMark PerformanceTest since then. The RAM is running at 2000 MHz as I wasn't able to boot at 3333 MHz, but I haven't tried changing that for a while or with the new BIOS version. I'm not able to pass through any USB controller as they are all grouped with other devices that won't work together. There also seems to be no driver for the onboard network adapter and it is only recognized as a 100 Mb device, but the PCI card (running in slot 4) that comes with the MB works at 1 Gb. Happy to help anyone who has questions as far as I can, as I'm still a noob with unRAID.
  4. It works for me with 6.4.0 the same way you described it. Do you use a different version? Maybe it's a hardware issue? I also tested with a program whether TRIM is working, and it seems to run correctly.
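     If you want to check from the unRAID console whether the SSDs even support TRIM, here is a minimal sketch:
       # list block devices with their discard (TRIM) capabilities;
       # non-zero DISC-GRAN / DISC-MAX columns mean the device supports TRIM
       lsblk --discard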
  5. I saw today that all SSDs have been trimmed, even the unassigned one
  6. Thanks for your work, this script works perfectly. I changed it a little bit so I can disable backups for certain disks of a VM. In my case I didn't want to back up a disk I store games on, as it would take forever and use a lot of space, but I still wanted to back up the rest of the VM. With the following addition, a vdisk will not be backed up if it is named nobak.img. This part can be added after the .img / .qcow code around line 971.
     # -----------------------------------------------------------
     # check if filename is "nobak.img" and if so don't create a backup of this vdisk
     # -----------------------------------------------------------
     if [ "$new_disk" == "nobak.img" ]; then
         echo "warning: $disk of $vm is not backed up because it is named nobak.img"
         continue
     fi
  7. Did you format the disk first? (You need to enable destructive mode in the Unassigned Devices plugin first.) I was able to mount it afterwards. I used XFS for the disk format as it should support TRIM for the disk. Unfortunately, I can't tell you whether the TRIM plugin covers unassigned devices afterwards or whether additional configuration is necessary, as I can't find any entries in the system log for the TRIM task.
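     If the plugin doesn't cover it, TRIM can also be triggered manually from the console; a minimal sketch, assuming the unassigned device is mounted under /mnt/disks (the mount name is just an example):
       # trim the unassigned device's filesystem and print how much was discarded
       fstrim -v /mnt/disks/Samsung_850_PRO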
  8. Hi all, I just recently installed unRAID and am still not sure how to set up my system to get the most out of the hardware. I hope somebody can help me with this. I have the following disks:
     HDD: 2 x 6 TB WD Red for the array
     HDD: 1 x 8 TB WD Red for the array
     SSD (NVMe): 2 x 512 GB M.2 Samsung 960 Pro
     SSD (SATA): 512 GB Samsung 850 PRO
     1 x open slot for an additional M.2 SSD
     Things I want to run on my server:
     1 x VM as a gaming computer
     5 x VMs with Windows Server 2016 (mostly idle)
     Several Dockers (Plex, TeamSpeak, web servers, ...)
     In my current setup I use the disks like this:
     HDD: created a 12 TB array
     2 x SSD (NVMe): created a RAID 0 cache pool which runs all VMs and Dockers
     SSD (SATA): used as an unassigned device (with the plugin), formatted as XFS, as a second vdisk for the gaming VM to store game data (I don't want to back up games and they use a lot of space)
     Weekly, I automatically back up all VMs and Dockers to the array.
     Problem: I feel that since I added the additional SSD as game storage, the performance of the gaming VM is not as good as it was before.
     Questions: Can unRAID use the full potential of the NVMe disks in the cache pool as I configured it right now? Do I need to install a special driver or do any special configuration for this? I want to reduce the impact of the server VMs and Dockers on the gaming VM as far as it makes sense.
     If NVMe SSDs in the cache pool don't make sense, I could imagine another configuration:
     HDD: 12 TB array
     SSD (SATA): use for cache and Dockers
     1 x SSD (NVMe): pass through to the gaming VM
     1 x SSD (NVMe): use with Unassigned Devices to run the server VMs (formatted as XFS?)
     The problem with this setup would be how to back up the gaming VM, and the backup would be really big as I'd need to back up the games as well.
     As a last option I could add an additional SSD (NVMe) and create a RAID 10 cache together with the SATA SSD and run everything on that, but I'm worried that the SATA SSD would slow down the NVMe ones.
     Any help is appreciated. Thank you, Simon
  9. Update, in case somebody else has to do this: the following steps worked for me and the VMs are running stable so far. Installing the drivers before converting/moving the disk is not necessary.
     - Create a backup of the servers
     - Shut down the servers and convert the vdisks to raw format (I used qemu-img with the following command; a quick way to verify the result is sketched after this list):
       .\qemu-img convert -O raw "D:\location\disk.vhdx" 'G:\location\vdisk1.img'
     - Create a new VM in unRAID for Server 2016 and disable automatic start
     - Replace vdisk1 with the converted disk and set the disk type to SATA
     - Add a second disk with 1M size and raw format
     - Add the newest VirtIO drivers ISO to the VM (https://fedoraproject.org/wiki/Windows_Virtio_Drivers)
     - Start the VM and install the VirtIO drivers via Device Manager (the same way as shown in the guide below)
     - Replace the GPU drivers and install the guest driver (as shown in the guide below)
     - Stop the VM, remove disk 2, and change disk 1 to raw format
     Done.
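     To verify a converted image before booting it, qemu-img can report the detected format and sizes. A minimal sketch, assuming the image has been copied to the unRAID cache (the path is an example):
       # print format, virtual size and allocated size of the converted vdisk
       qemu-img info /mnt/user/domains/Server2016/vdisk1.img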
  10. I like the description. But it's strange that unRAID was still able to start (with a strange IP and therefore only accessible from the host itself) and the USB controller didn't show up in the VM management. As I understand it, if the unRAID flash device had been on the same controller group I tried to pass through, then this should not have been possible anymore? Or is this a protection to prevent that scenario? I will try it again later with the front panel USB connectors.
  11. Shouldn't be the case, as the USB stick showed up in a different IOMMU group (I checked this over an SSH connection with the following command):
      for usb_ctrl in $(find /sys/bus/usb/devices/usb* -maxdepth 0 -type l); do
          pci_path="$(dirname "$(realpath "${usb_ctrl}")")"
          echo "Bus $(cat "${usb_ctrl}/busnum") --> $(basename $pci_path) (IOMMU group $(basename $(realpath $pci_path/iommu_group)))"
          lsusb -s "$(cat "${usb_ctrl}/busnum"):"
          echo
      done
  12. I'm using 6.4, and that error showed up when I tried to pass through a USB controller on my system yesterday. I used the unRAID GUI on the host to remove the "vfio-pci.ids" setting for my USB controller from the syslinux configuration, and afterwards it worked again. However, I only just started with unRAID and may also have configured something wrong.
      My entry in the syslinux configuration was:
      append pcie_acs_override=downstream vfio-pci.ids=1022:145c initrd=/bzroot
      And this is the IOMMU group of the USB controller I want to pass through:
      44:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:145a]
      44:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456] [RESET]
      44:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
      These two lines show up alongside every USB controller (using a Threadripper and an ASUS ROG Zenith):
      44:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:145a]
      44:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
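      For reference, a sketch of that edit in the syslinux configuration (on unRAID this usually lives at /boot/syslinux/syslinux.cfg; the path is from memory, so double-check on your flash drive):
        # before: binds the USB controller 1022:145c to vfio-pci at boot
        append pcie_acs_override=downstream vfio-pci.ids=1022:145c initrd=/bzroot
        # after removing the vfio-pci.ids entry
        append pcie_acs_override=downstream initrd=/bzroot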
  13. Hi all, I'm new to unRAID and currently planning a migration from a Hyper-V host to unRAID/KVM. As this is my first migration of this kind and I'm not familiar with unRAID yet, I want to ask in this forum for some tips and to see if I've forgotten something. Currently I'm running a Hyper-V host with 5 Windows Server 2016 VMs on it. I need these servers for a little project, but it is a testing environment and thus not very critical that it runs stably. The servers don't need much power and are mostly idle. Additionally, I'd like to run some Dockers for Plex and other applications. As I'm also planning to run a gaming VM on the host, I decided to replace the Hyper-V server with unRAID/KVM. Unfortunately, it's the same machine, and thus the migration is a bit tricky.
      Hardware:
      MB: ASUS ROG Zenith Extreme
      CPU: AMD Threadripper 1950X
      RAM: 64 GB HyperX Predator
      HDD: 2 x 6 TB & 1 x 8 TB WD Red for the array
      SSD: 2 x 512 GB M.2 Samsung 960 Pro for cache
      GPU1: GTX 1080 for the gaming VM
      GPU2: GTX 950 for a normal Windows 10 VM for my wife
      GPU3: ASUS Radeon R5 230 for the unRAID host
      I'm aware of the Threadripper GPU passthrough problem, but thanks to the effort of some members here a solution seems to be getting closer, so I decided to risk it and give it a try.
      My first question is: do you see any problems with the hardware? I was thinking of adding an additional 512 GB M.2 to create a cache pool with a total of 1 TB to run all the VMs on, but I'm worried that this would slow down the cache read/write speed. Alternatively, I could back the VMs up once a week onto the array and leave the cache without protection (one M.2 for the servers/Dockers and the other for both Windows 10 installations). The MB also came with an additional network card (10 Gb). Do you see any way this could be useful in this setup?
      For the migration, I planned the following steps:
      - To lower the risk, I will first get the virtualized servers running on a second Hyper-V host as a backup
      - After this, I will install the VirtIO drivers on the Windows Servers
      - Shut down the servers and convert the disk files to the qcow2/raw format, according to this post: https://shoup.io/migrate-hyper-v-windows-guest-to-kvm-w-libvirtd.html
      - Create a new virtual machine within unRAID and choose the converted disk as the primary disk
      - Shut down the VM and edit the XML: locate the <disk> section for the primary virtual disk, remove the <address> line completely, and change bus='virtio' in the <target> section to bus='ide' (a sketch of this edit is at the end of this post)
      - Start up the VM
      General questions for the migration: would you try to convert the whole Hyper-V host and run it on unRAID (virtualization within virtualization), or run the different servers directly on unRAID? Also, according to that description the disks should be converted to qcow2 format, while the Limetech wiki recommends raw for better performance. Which one would you recommend?
      I hope this works the way I want it to and would be thankful for any advice or tips. Cheers, Simon
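      A minimal sketch of the <disk> edit described in the migration steps above (paths, the dev name and the driver attributes are placeholders, not taken from a real config):
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/Server2016/vdisk1.img'/>
          <!-- was: <target dev='hdc' bus='virtio'/> -->
          <target dev='hdc' bus='ide'/>
          <!-- the <address .../> line that was generated here has been removed -->
        </disk>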