gelmi

Members
  • Posts: 31
  • Joined
  • Last visited
  • Days Won: 1

Everything posted by gelmi

  1. Hi, I currently have a ZFS pool (a mirror of 2 SSDs) created with the ZFS plugin (I assume I will be able to import it as a native ZFS pool after 6.12). I have another (main) server running TrueNAS SCALE in a different location, with a couple of VMs on it. TrueNAS creates its VM disks as zvols, as far as I know, and I take daily snapshots of each VM independently (snapshots of the whole zvols). I would like to use 'zfs send | zfs recv' to replicate these snapshots from TrueNAS to Unraid as a backup. I think there should be no issue with that once the plugin is installed (correct me if I am wrong).

     Since both Unraid and TrueNAS SCALE use KVM, I was wondering if I could use Unraid VMs as temporary/backup VMs in case of a hardware failure on the main server - probably just for a couple of hours. Zvols are block storage, and I can see them being added as virtual block devices at /dev/zd(x); I can check which one is which by running 'ls -l /dev/zvol/<poolname>'. I have briefly tested that I can attach such a block device to an Ubuntu VM as:

       <disk type='block' device='disk'>
         <driver name='qemu' type='raw' cache='writeback'/>
         <source dev='/dev/zd0'/>
         <target dev='hdc' bus='sata'/>
         <boot order='1'/>
         <address type='drive' controller='0' bus='0' target='0' unit='2'/>
       </disk>

     I would also assign similar RAM/CPU resources as on the TrueNAS machine, and I already have a reverse proxy in place to match the Cloudflare/domains (DNS) setup. A couple of questions here:
     1. Should I use cache='writeback' or cache='none'?
     2. Should I use bus='sata' or virtio?

     Once the TrueNAS VMs are operational again, I would manually move the database changes (only time-series data) from the Unraid VMs to the TrueNAS VM databases, so everything is up to date and I do not lose any data between the snapshot and the moment the TrueNAS machine failed. That way I do not have to deal with transferring snapshots back from Unraid to TrueNAS. Is this idea OK, or should I be worried about something?
     The alternative would be an additional TrueNAS server as a fallback, or some cloud-hosted fallback architecture.
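The 'zfs send | zfs recv' replication step above can be sketched in shell. All names below (tank, ssdpool, daily-*, the unraid host) are placeholders, and the very first transfer would have to be a full (non-incremental) send to seed the destination:

```shell
#!/bin/sh
# Build the incremental 'zfs send | zfs recv' pipeline for one zvol.
# Dataset, snapshot, and host names are hypothetical examples.
send_cmd() {
  ds=$1; prev=$2; curr=$3; dest=$4
  printf 'zfs send -i %s@%s %s@%s | ssh root@unraid zfs recv -F %s\n' \
    "$ds" "$prev" "$ds" "$curr" "$dest"
}

# Replicate the delta between yesterday's and today's snapshot:
send_cmd tank/vm/ubuntu0 daily-0101 daily-0102 ssdpool/vm/ubuntu0
```

On the TrueNAS side you would run the printed command directly (or from a cron job); the -F on the receiving side rolls the destination back to the last common snapshot before applying the increment.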
  2. @baby0nboardInSg I got my RX550 working after upgrading from 6.9.2 to 6.11.5 with these steps. Not ideal, but a good band-aid.
  3. I am still not sure what the core issue is, but I have a workaround (not a complete fix, though). After upgrading from 6.9.2 to 6.11.5, every VM with the RX550 passed through got a black screen and one core stuck at 100%, and nothing else happened. To get them running again I did the following:
     1. Backed up both the VM disks and the XML files.
     2. Deleted the VMs and recreated them with only one GPU - the virtual one. I use SeaBIOS in all of them, and I switched to Q35 (7.1) for both my Ubuntu and W10 VMs (I had issues with i440fx).
     3. Added the same VM disks back to each VM.
     4. Started them and, via VNC, removed all GPU drivers (AMD Cleanup Utility for Windows; for Ubuntu I followed the removal procedure here).
     5. Rebooted, then shut down.
     6. Added the RX550 as a second GPU (plus its audio device) and started the VM.
     7. Installed the newest drivers from the AMD website (following the step-by-step installation instructions for Ubuntu). At some point during driver installation (on both W10 and Ubuntu) I could see a second display via the RX550 - that was really good. Rebooted.
     8. Set the monitor as my main display and the virtual 'monitor' as secondary, with the display layout arranged so the virtual one sits at the top-right corner of the main one.
     9. Rebooted.
     10. Backed up.
     Since then GPU passthrough has worked perfectly. This is not ideal, since the second 'screen' is still there using some resources, but I never got it working with only one GPU (the external, passed-through one) and no virtual one. Whenever I tried that, I ended up with a black screen again, and after force-stopping the VM something broke, leaving me with a 'No Bootable Disk' message on the next start (I had to create a fresh VM with the same disks). As I said, this is not ideal, but hopefully it lets some of you (and me) stay on the latest Unraid version and keep GPU passthrough in your VMs without the black screen.
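Step 1 of the workaround can be scripted. A minimal sketch, assuming the default Unraid paths and a hypothetical VM named Ubuntu ('virsh dumpxml' captures the current domain definition):

```shell
#!/bin/sh
# Back up one VM's libvirt XML definition and its vdisk before
# deleting and recreating the VM. The VM name and paths are examples
# from a typical Unraid layout -- adjust to your setup.
backup_vm() {
  vm=$1
  dest="/mnt/user/backups/$vm-$(date +%Y%m%d)"
  mkdir -p "$dest"
  virsh dumpxml "$vm" > "$dest/$vm.xml"           # the VM definition
  cp "/mnt/user/domains/$vm/vdisk1.img" "$dest/"  # the VM disk image
}

# On a live Unraid box:
# backup_vm Ubuntu
```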
  4. Guys, any progress on this? I have the same issue passing an RX550 through to VMs (SeaBIOS, both i440fx/Windows 10 and Q35/Ubuntu). Everything works fine on 6.9.2 but not after I upgrade to 6.11.5; I had to downgrade. Something is breaking the KVM VMs. When will this be fixed?
  5. Yes, you can run Unraid headless. I have 2 discrete GPUs, and sometimes one goes to the 1st VM and the other to the 2nd VM; both VMs run and Unraid still works. The only catch is that after I shut a VM down I cannot reassign my primary GPU back to the Unraid console, but that is down to my particular GPU - some GPUs can be switched back via a script. I need to restart my tower when I want the Unraid console to use that GPU again.
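The kind of "switch back" script mentioned here typically unbinds the GPU from vfio-pci and rebinds it to the host driver through sysfs. A sketch with a hypothetical PCI address and driver name (many cards, including mine, do not re-initialize cleanly this way):

```shell
#!/bin/sh
# Hand a GPU back from vfio-pci to the host driver via sysfs.
# 0000:28:00.0 and amdgpu are examples -- substitute your card's
# PCI address (see 'lspci -D') and its host driver.
rebind_gpu() {
  gpu=$1; drv=$2
  echo "$gpu" > /sys/bus/pci/drivers/vfio-pci/unbind
  echo "$gpu" > "/sys/bus/pci/drivers/$drv/bind"
}

# On a live box:
# rebind_gpu 0000:28:00.0 amdgpu
```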
  6. Hi, I observe the same symptoms - no web interface, no ping - after some hours. I updated the BIOS with C-states disabled some time ago, and --c6-disable is set in the script. However, today when I could not access the web interface, I tried unplugging and replugging the network cable, and after a couple of seconds everything went back to normal. The tower is connected to a WiFi bridge. Diagnostics are attached. I do not know whether this is related to Unraid (I upgraded to 6.5 a couple of days ago and the problem started then) or to the WiFi bridge. darktower-diagnostics-20180327-0826.zip
  7. Little update: I managed a 24-hour RAM test without errors (multi-threaded mode) at both 2667 MHz and 2933 MHz. After that I ran a correcting parity check at 2933 MHz three times: the first found and corrected ~300 errors, and the following two found 0 errors. I think I will stay at 2933 MHz and not push for 3200 MHz with future BIOS updates until I move my array disks into the new storage box. Thanks for all your help.
  8. Understood. I am in the process of assembling a NAS-and-Dockers-only machine with ECC RAM to store my data, but for now I have to share one PC between NAS and workstation (VM) duties. In the meantime I will probably switch to 2667 MHz just to be on the safe side. Thanks.
  9. OK. But after changing to 2933 MHz, a clean 12-hour Memtest, and 0 errors on the second correcting parity check run, would it be safe to assume that the 3200 MHz overclock was causing the problem?
  10. My flash drive is configured to force EFI boot. Memtest in multi-threaded mode started spilling errors after just 5 minutes. I recently updated the BIOS, and that was the first time I could set the RAM to 3200 MHz and boot. I have returned to 2933 MHz and started Memtest in multi-threaded mode again - no errors so far. I will let it run overnight, and if there are no errors I will do the parity check once more. Does this sound like a plan?
  11. Do you mean the memtest from the EFI boot menu (UNRAID, UNRAID GUI, SAFEMODE, MEMTEST)?
  12. Due to some hard resets while passing a GPU through to a VM, the standard parity check after booting showed parity errors. I ran a correcting parity check twice, and it showed errors every time. SMART tests on both HDDs (parity and data) show no errors. I have an Asus X370 Pro board where all SATA ports go through the X370 chipset (no Marvell controller). Attaching my diagnostic logs. Any ideas how to fix these parity errors? darktower-diagnostics-20180222-1643.zip
  13. In group 19 you have the on-board audio and SATA together, so you cannot pass through only one device from that group. Which BIOS version do you have? I have an Asus X370, and a couple of days ago a new version with a new AGESA was released; for the first time my on-board audio is in its own IOMMU group, without the ACS patch.
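To check how a given BIOS/AGESA splits the groups, the sysfs tree can be listed directly. A small sketch (the helper name is mine; on a live box it walks /sys/kernel/iommu_groups):

```shell
#!/bin/sh
# list_iommu ROOT: print every PCI device under ROOT together with its
# IOMMU group number. ROOT is normally /sys/kernel/iommu_groups.
list_iommu() {
  root=$1
  for dev in "$root"/*/devices/*; do
    [ -e "$dev" ] || continue      # skip if the glob matched nothing
    group=${dev#"$root"/}          # strip the root prefix...
    group=${group%%/*}             # ...leaving just the group number
    printf 'IOMMU group %s: %s\n' "$group" "${dev##*/}"
  done
}

list_iommu /sys/kernel/iommu_groups
```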
  14. Glad I could help. A script is a good way to go. The problem is with the (re)initialization of NVIDIA cards: NVIDIA does not like consumer GPUs being used for virtualization and would rather you use their enterprise cards. I still have to restart or sleep/wake the Unraid system in order to reboot the Linux VM that uses my old GTS 450.
  15. Have you tried adding both NVIDIA GPUs to one VM at the same time to see how they show up in Device Manager? Or maybe VNC (as the primary GPU) plus the GPU from the 1st slot (as secondary)?
  16. OK, so if both cards work fine in slot 2, the problem is with sharing/passing through the GPU that is the boot GPU for Unraid in slot 1. Have you checked either of these solutions? https://arseniyshestakov.com/2016/03/31/how-to-pass-gpu-to-vm-and-back-without-x-restart/ <------------ the binding/unbinding solution. What is your current Unraid RC version?
  17. OK, that is strange. So, let me get this straight:
     + You have two identical GPUs, the first in slot 1 and the second in slot 2?
     + The GPU in slot 1 suffers from the black-screen problem, but the second (identical NVIDIA) GPU in slot 2 works?
     + Also, when you swap the cards, the card from slot 2 does not work once you put it into slot 1, right?
     If that is correct, maybe try moving the card from slot 1 into slot 3 (only x8) just for testing, so the cards sit in slots 2 and 3, and check which of them fails with the VM. Maybe the issue is not PCIe slot 1 itself, but rather that a card cannot be passed to a VM after it has been initialized for the host Unraid console. Do you boot Unraid in UEFI or legacy mode?
  18. I have 2 GPUs: an RX560 in the first PCIe slot and a GTS 450 in the second. I can use either of them for the Windows and Ubuntu VMs. Try rebooting the PC and starting the VM one more time - maybe your GPU suffers from the reset bug, so it can only be initialized once in a VM per boot. If that does not work, share some more information about your configuration: BIOS version, which PCIe slot you use, exact RC version. Did you try dumping the GPU ROM and feeding it to the VM configuration?
  19. In the VM I set:
     * CPU Mode: Host Passthrough
     * Machine: i440fx-2.9
     * BIOS: SeaBIOS
     * Hyper-V: Yes
     * Graphics Card and Sound Card: choose both NVIDIA devices
     That should be it. Check this and post whether it works or not.
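For reference, those GUI settings map roughly to libvirt domain XML like the following. This is a sketch of the relevant fragments only (a real domain also needs name, memory, etc.), and the PCI addresses of the two NVIDIA functions are hypothetical examples:

```xml
<domain type='kvm'>
  <!-- Machine: i440fx-2.9, BIOS: SeaBIOS -->
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type>
  </os>
  <!-- CPU Mode: Host Passthrough -->
  <cpu mode='host-passthrough'/>
  <!-- Hyper-V: Yes -->
  <features>
    <acpi/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>
  <devices>
    <!-- Graphics + Sound: both functions of the NVIDIA card;
         bus/slot values below are example placeholders -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source><address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/></source>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source><address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/></source>
    </hostdev>
  </devices>
</domain>
```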
  20. What VM OS do you want to run? Linux or Windows?
  21. I am already past that. Now my problem is this
  22. Hi, I have a problem with GPU passthrough. I am using Unraid 6.3.5 (trial version). I have only one GPU - an RX 550 - and I am trying to pass it through to an Ubuntu VM, but no luck so far. I do not have a second GPU at home to test with. I have tried the 1st and 2nd PCIe slots (with and without the ACS patch). I am on the newest BIOS 0902 (ASUS X370 Pro with a Ryzen 1600, 16 GB RAM, RX 550). The GPU displays the Unraid text console on the monitor when I turn on the PC. When I create the VM with VNC graphics, I can see it booting from the ISO, but when I choose the RX 550 and start the VM, nothing happens - I can still see the login prompt on the monitor. Any ideas how to pass through the only GPU in my case?

     Info:
     Model: Custom
     M/B: ASUSTeK COMPUTER INC. - PRIME X370-PRO
     CPU: AMD Ryzen 5 1600 Six-Core @ 3200
     HVM: Enabled
     IOMMU: Enabled
     Cache: 576 kB, 3072 kB, 16384 kB
     Memory: 16 GB (max. installable capacity 64 GB)
     Network: bond0: fault-tolerance (active-backup), mtu 1500; eth0: 100 Mb/s, full duplex, mtu 1500
     Kernel: Linux 4.9.30-unRAID x86_64

     IOMMU groups:
     IOMMU group 0: [1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
     IOMMU group 1: [1022:1453] 00:01.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1453
     IOMMU group 2: [1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
     IOMMU group 3: [1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
     IOMMU group 4: [1022:1453] 00:03.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1453
     IOMMU group 5: [1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
     IOMMU group 6:
       [1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
       [1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1454
       [1022:145a] 29:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
       [1022:1456] 29:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Device 1456
       [1022:145c] 29:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Device 145c
     IOMMU group 7:
       [1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1452
       [1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 1454
       [1022:1455] 2a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
       [1022:7901] 2a:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
       [1022:1457] 2a:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Device 1457
     IOMMU group 8:
       [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
       [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
     IOMMU group 9:
       [1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1460
       [1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1461
       [1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1462
       [1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1463
       [1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1464
       [1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1465
       [1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1466
       [1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Device 1467
     IOMMU group 10:
       [1022:43b9] 03:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] Device 43b9 (rev 02)
       [1022:43b5] 03:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] Device 43b5 (rev 02)
       [1022:43b0] 03:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b0 (rev 02)
       [1022:43b4] 1d:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
       [1022:43b4] 1d:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
       [1022:43b4] 1d:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
       [1022:43b4] 1d:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
       [1022:43b4] 1d:06.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
       [1022:43b4] 1d:07.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 43b4 (rev 02)
       [1b21:1343] 25:00.0 USB controller: ASMedia Technology Inc. Device 1343
       [8086:1539] 26:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
     IOMMU group 11:
       [1002:699f] 28:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Lexa PRO [Radeon RX 550] (rev c7)
       [1002:aae0] 28:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Device aae0