gelmi

Members
  • Posts: 31
  • Days Won: 1

gelmi last won the day on November 25, 2017

gelmi had the most liked content!


gelmi's Achievements: Noob (1/14)

Reputation: 6

  1. Hi, I currently have a ZFS mirror pool created from 2 x SSD with the ZFS plugin (I assume I will be able to import it as a native ZFS pool after 6.12). I have another (main) server in a different location running TrueNAS SCALE with a couple of VMs. As far as I know, TrueNAS creates its VM disks as zvols. I take daily snapshots of each VM independently (snapshots of the whole zvols), and I would like to use 'zfs send | zfs recv' to replicate those snapshots to Unraid as backups. I think there should be no issue with that once the plugin is installed (correct me if I am wrong).

     Since both Unraid and TrueNAS SCALE use KVM, I was wondering if I could use Unraid VMs as temporary/backup VMs in case of a hardware failure on the main server - probably just for a couple of hours. Zvols are block devices, and I can see them added as virtual block storage at /dev/zd(x); I can check which one is which by running 'ls -l /dev/zvol/<poolname>'. I have briefly tested that I can attach such a block device to an Ubuntu VM like this:

       <disk type='block' device='disk'>
         <driver name='qemu' type='raw' cache='writeback'/>
         <source dev='/dev/zd0'/>
         <target dev='hdc' bus='sata'/>
         <boot order='1'/>
         <address type='drive' controller='0' bus='0' target='0' unit='2'/>
       </disk>

     I would also assign similar RAM/CPU resources as on the TrueNAS machine, and I already have a reverse proxy in place to match the Cloudflare/domain (DNS) setup. A couple of questions here:

     1. Should I use cache='writeback' or cache='none'?
     2. Should I use bus='sata' or virtio?

     After the TrueNAS VMs are operational again, I would manually move the database changes (only time-series data) from the Unraid VMs back to the TrueNAS VMs, so everything stays up to date and I do not lose any data between the snapshot and the moment the TrueNAS machine failed. That way I do not have to deal with transferring snapshots from Unraid back to TrueNAS. Is everything OK with this idea, or should I be worried about something?
     The alternative would be to have an additional TrueNAS server as a fallback, or some cloud-hosting fallback architecture.
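If virtio turns out to be the better bus, a variant of the snippet above might look like the following. This is a hedged sketch, not a tested config: cache='none' avoids double caching (host page cache plus guest cache) on raw block devices, io='native' is commonly paired with it, and the guest needs virtio drivers installed (Windows guests need the virtio-win drivers; Ubuntu ships them).

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/zd0'/>
  <target dev='vdc' bus='virtio'/>
  <boot order='1'/>
</disk>
```

Note the virtio target uses a 'vdX' device name, and the drive-style <address> element from the SATA version is dropped so libvirt can assign a PCI address itself.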
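The 'zfs send | zfs recv' replication described above can be sketched as a small script. Everything here is a placeholder assumption: the dataset names, the 'unraid' SSH host alias, and the snapshot labels are illustrative, and ZFS/SSH default to 'echo' so the script dry-runs on a machine without ZFS installed.

```shell
#!/bin/sh
# Sketch: replicate a daily zvol snapshot from the TrueNAS box to Unraid.
# Dataset names, host alias, and snapshot labels are placeholders.
# ZFS/SSH default to 'echo ...' so this prints the commands instead of
# running them; set ZFS=zfs and SSH=ssh on a live system.
ZFS="${ZFS:-echo zfs}"
SSH="${SSH:-echo ssh}"

replicate() {
    src="$1"; dst="$2"; today="$3"; prev="$4"
    # 1. snapshot the zvol on the source side
    $ZFS snapshot "${src}@${today}"
    # 2. incremental send against the previous snapshot, receive on Unraid
    #    (-F lets recv roll the target back to the last common snapshot)
    $ZFS send -i "@${prev}" "${src}@${today}" | $SSH unraid "zfs recv -F ${dst}"
}

replicate tank/vms/ubuntu-disk ssdpool/backup/ubuntu-disk daily-2024-01-02 daily-2024-01-01
```

The first replication has to be a full send (no -i); after that, incremental sends only transfer the blocks changed since the previous snapshot.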
  2. @baby0nboardInSg I got my RX550 working after upgrading from 6.9.2 to 6.11.5 with these steps. Not ideal, but a good band-aid.
  3. I am still not sure what the core issue is, but I have a workaround (not a complete fix, though). After upgrading from 6.9.2 to 6.11.5, all of my VMs with the RX550 passed through got a black screen and one core stuck at 100%, and nothing was happening. To get them running again, I did the following:

     1. Backed up both the VM disks and the XML files.
     2. Deleted the VMs and created them again with only one GPU - the virtual one. I use SeaBIOS in all of them, and I switched to Q35 (7.1) for both my Ubuntu and W10 based VMs (I had issues with i440fx).
     3. Added the same VM disks to each VM.
     4. Ran them and, via VNC, removed all the GPU drivers (AMD Cleanup Utility for Windows; followed the removal procedure for Ubuntu here).
     5. Rebooted and then shut down.
     6. Added the RX550 as a second GPU (+ audio) and started the VM.
     7. Installed the newest drivers from the AMD website (following the step-by-step installation instructions for Ubuntu). At some point during driver installation (on both W10 and Ubuntu) I could see a second display via the RX550 GPU - that was really good. Rebooted.
     8. Set the monitor as my main display in display settings and the virtual 'monitor' as secondary, laying out the displays so that the virtual one sits in the top-right corner of the main one.
     9. Rebooted.
     10. Backed up.

     After that, GPU passthrough always runs perfectly. This is not ideal, since the second 'screen' is still there and uses some resources, but I never got it working with only one GPU (external, passed through) and no virtual one. Whenever I tried, I ended up with a black screen again, and after a Force Shutdown of the VM something would break it and I would get 'No Bootable Disk' when starting the VM again (I needed to create a fresh VM with the same disks). As I said, this is not ideal, but hopefully it allows some of you (and me) to stay on the latest Unraid version and still have GPU passthrough in your VMs without a black screen.
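Step 1 above (backing up the XML files) can be scripted with virsh. This is a minimal sketch under assumptions: the VM name and backup directory are placeholders, and VIRSH defaults to 'echo virsh' so the script dry-runs on a machine without libvirt installed.

```shell
#!/bin/sh
# Back up a VM's libvirt XML definition before recreating it (step 1 above).
# VM name and backup path are placeholders, not from any real setup.
# VIRSH defaults to 'echo virsh' so this can be dry-run without libvirt.
VIRSH="${VIRSH:-echo virsh}"
BACKUP_DIR="${BACKUP_DIR:-./vm-xml-backups}"

backup_vm_xml() {
    vm="$1"
    mkdir -p "$BACKUP_DIR"
    # dumpxml prints the full domain definition; keep a dated copy
    $VIRSH dumpxml "$vm" > "$BACKUP_DIR/${vm}-$(date +%Y%m%d).xml"
}

backup_vm_xml "Ubuntu-RX550"
```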
  4. Guys, any progress on this? I have the same issue with passing through an RX550 into VMs (SeaBIOS, both i440fx/Windows10 and Q35/Ubuntu). Everything works fine on 6.9.2, but not after I upgrade to 6.11.5; I had to downgrade. Something is breaking the KVM guests. When will this be fixed?
  5. Yes, you can run Unraid headless. I have 2 external GPUs, and sometimes one is used by the 1st VM and the other by the 2nd VM; both VMs are running and Unraid still works. The only thing is that after I shut down a VM I cannot reassign my primary GPU to the Unraid console, but this is due to my GPU - some GPUs can be switched back via a script. I need to restart my tower when I want the Unraid console to use the GPU again.
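For GPUs that can be handed back, the "switched back via script" approach usually means removing the PCI device from the bus and rescanning so the host driver can rebind it. A hedged sketch, not a tested recipe: the PCI address is a placeholder (find yours with lspci), whether rebinding actually works depends on the GPU's reset behavior, and RUN defaults to 'echo' so nothing is written to sysfs outside a real run.

```shell
#!/bin/sh
# Hand a passed-through GPU back to the host after the VM shuts down.
# The PCI address is a placeholder - look yours up with lspci.
# RUN defaults to 'echo' so this dry-runs without touching sysfs.
RUN="${RUN:-echo}"
GPU="0000:0a:00.0"

rebind_gpu() {
    # remove the device, then rescan the bus so the host driver rebinds it
    $RUN sh -c "echo 1 > /sys/bus/pci/devices/$GPU/remove"
    $RUN sh -c "echo 1 > /sys/bus/pci/rescan"
}

rebind_gpu
```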
  6. Hi, I observe the same symptoms - no web interface, no ping - after some hours. I updated the BIOS with C-states disabled some time ago, and --c6-disable is in the script. However, today when I could not access the web interface I tried unplugging and plugging the network cable back in, and after a couple of seconds everything went back to normal. The tower is connected to a WiFi bridge. Diagnostics are attached. I do not know whether this is related to Unraid (I upgraded to 6.5 a couple of days ago and the problem started then) or to the WiFi bridge. darktower-diagnostics-20180327-0826.zip
  7. A little update. I was able to pull off a 24h RAM test without errors (Multi-threaded mode) at both 2667 MHz and 2933 MHz. After that I ran a correcting parity check at 2933 MHz 3 times: the first one found and corrected ~300 errors, and the following two found 0 errors. I think I will stay on 2933 MHz and not push for 3200 MHz with the next BIOS updates until I move my array disks into the new storage box. Thanks for all your help.
  8. Understood. I am in the process of assembling a NAS-only machine with Dockers and ECC RAM to store my data, but for now I have to share one PC between NAS and workstation (VM) duties. In the meantime I will probably switch to 2667 MHz just to be on the safe side. Thanks.
  9. OK. But after changing to 2933 MHz, a clean 12h Memtest, and 0 errors on the second correcting parity check run, would it be safe to assume that the 3200 MHz OC was causing the problem?
  10. My flash configuration is set to force EFI boot. Memtest in Multi-threaded mode started spilling errors after just 5 minutes. I recently updated the BIOS, and that was the first time I could set the RAM to 3200 MHz and boot. I have returned to 2933 MHz and started Memtest in Multi-threaded mode again; no errors so far. I will let it run overnight and, if there are no errors, do a parity check once more. Does this sound like a plan?
  11. Do you mean a memtest from the EFI boot menu (UNRAID, UNRAID GUI, SAFEMODE, MEMTEST)?
  12. Due to some hard resets while passing a GPU through to a VM, the standard parity check after booting showed parity errors. I ran the correcting parity check twice, and each time it showed errors. SMART tests on both HDDs (parity and data) show no errors. I have an Asus X370 Pro board where all SATA ports go through the X370 chipset (no Marvell controller). Attaching my diagnostic logs. Any ideas how to fix these parity errors? darktower-diagnostics-20180222-1643.zip
  13. In group 19 you have the on-board audio and SATA, so you cannot pass through only one thing from the group. Which BIOS version do you have? I have an Asus X370, and a couple of days ago a new version with a new AGESA was released; for the first time my on-board audio is in its own IOMMU group, without the ACS patch.
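A quick way to check how the groups fall out after a BIOS update is to walk the standard sysfs layout. A minimal sketch - the /sys/kernel/iommu_groups path is the standard kernel location; the optional base-directory argument exists only so the function can be exercised outside a real system.

```shell
#!/bin/sh
# Print every PCI device together with its IOMMU group number by walking
# the standard sysfs layout. An alternative base dir can be passed for testing.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for dev in "$base"/*/devices/*; do
        [ -e "$dev" ] || continue   # glob matched nothing
        group="${dev#$base/}"       # strip the base prefix...
        group="${group%%/*}"        # ...leaving just the group number
        printf 'IOMMU group %s: %s\n' "$group" "$(basename "$dev")"
    done
}

list_iommu_groups
```

Devices sharing a group number must be passed through together (or kept on the host together), which is exactly the group 19 audio + SATA situation described above.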