Rive

Everything posted by Rive

  1. Yes, it did the trick, thank you! The images were successfully copied; I lost about 10 KB of data in one and 1.3 MB in the other - no difference, they still function as intended. (I had heard about ddrescue, but I thought it was part of dd, and dd can't ignore I/O errors either.)
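     (For the record, GNU ddrescue turned out to be a standalone tool, not part of dd, and it is built for exactly this kind of copy. A minimal sketch of what it would have looked like - the paths here are made up:

     # ddrescue skips unreadable sectors, records them in a map file,
     # and can be re-run later to retry the bad spots:
     ddrescue /mnt/cache/domains/vm1/vdisk1.img /mnt/disk1/backup/vdisk1.img vdisk1.map

     A second pass with retries, e.g. ddrescue -r3 with the same map file, would try the failed sectors again.)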
  2. That fails even in sector-by-sector mode (I use Acronis True Image). I guess the reason is that the software doesn't have direct hardware access, so the system just spits out a bunch of errors and stops the operation completely, expecting a device/I/O reset.
  3. TL;DR first: I found a CLI utility called safecopy (https://safecopy.sourceforge.net/) and I'd love to have it on Unraid. But it needs to be compiled, and that's not how Unraid works. How, then?
     Explanation: I have two big VM images on an SSD cache drive with a btrfs partition. The drive is failing (in some way): more than 1000 errors in scrub, and btrfs check and repair do nothing. I need to copy these two big files off the SSD, but when I just use MC, the process gets interrupted with an I/O error. The Unraid logs are full of messages like "Apr 2 19:33:01 SRV kernel: print_req_error: critical medium error, dev nvme0n1, sector 176853640".
     Why do I need to copy these VM images? Because they still work! Probably nothing serious is damaged or lost inside the images themselves. So I know what I'm doing; I only need these two files in whatever damaged state they are in now. Everything else has already been copied - only these two files are left.
     Apparently safecopy can do the job, and there is a build guide. But Unraid lacks compilers and libraries; in general it's not THAT type of Linux. So, how do I compile and run safecopy on Unraid? (And if it's possible to do in Docker, how do I do that?) Thank you.
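     (One plausible route, since Unraid ships no toolchain: compile it inside a throwaway Docker container and run the binary against the mounted cache. This is a sketch under assumptions - the image, version number and paths are illustrative, not tested on this exact box:

     # Start a disposable Debian container with the failing cache mounted in:
     docker run --rm -it -v /mnt/cache:/cache -v /tmp/build:/build debian:stable bash
     # ...then, inside the container:
     apt-get update && apt-get install -y build-essential wget
     cd /build
     wget https://downloads.sourceforge.net/safecopy/safecopy-1.7.tar.gz   # check the current version
     tar xzf safecopy-1.7.tar.gz && cd safecopy-1.7
     ./configure && make
     # Copy a damaged image file, skipping unreadable regions:
     ./safecopy /cache/domains/vm1/vdisk1.img /build/vdisk1.img

     Since the copy is file-level through the btrfs mount, the container needs no privileged device access.)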
  4. And it did the trick: I just added module_blacklist=i915 to all the append lines in syslinux.cfg, and the system started to boot. Thank you.
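     (For anyone finding this later, the resulting stanza in /boot/syslinux/syslinux.cfg looks roughly like this - the label text varies by Unraid version:

     label Unraid OS
       menu default
       kernel /bzimage
       append module_blacklist=i915 initrd=/bzroot

     The parameter has to be repeated on the append line of every boot entry you actually use, e.g. GUI mode or safe mode.)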
  5. Ok, it was the integrated graphics adapter, an Intel GMA 3100 - I threw in a GT 210 card and the system booted just fine. But that shouldn't have been a problem: this forum has old posts confirming that earlier versions of Unraid worked on this motherboard and on others with the Intel GMA 3100. Plus, technically the problem remains - I don't really want a discrete card in my server just to boot it.
  6. Hey! I've been trying to run Unraid 6.11.1 on an Asus P5E-VM DO motherboard with a Core 2 Duo E6750 CPU and 4 GB of RAM, using the motherboard's video output. The system starts booting normally, but at some point the screen goes black and everything stops. No network either - it doesn't get an IP from my DHCP server. The machine itself is fine; it runs a Win 7 install. The flash drive is fine too: it works on another machine, booting at least to the CLI. The flash drive is fresh - no Unraid keys, no trial mode configured. Here I made a video of the boot sequence. A reminder: the colored bars mean no signal, while a black screen means there is a signal (the capture card detects something), but it's blank.
  7. Same problem as cherritaker. Is 209.222.18.222 down?
  8. Hey! I'm having problems setting up completed torrents to be moved to a different directory. I tried to change it in the GUI ("Autotools" section) - it didn't work, although the setting seems to be saved. Then the FAQ says "So rutorrent is purely a web frontend to rtorrent, and as such does NOT modify any settings for rtorrent...", and I still don't get the logic of it. Why would you even have settings in the GUI that just don't work? Why add them to the GUI at all? Seems like nonsense to me. Then there is the rtorrent.rc file: it has a setting with my directory for incomplete downloads, but I don't see any setting for moving torrents to another directory after completion. BTW, I changed the default download directory in the GUI, and it works the way it should - torrents are downloaded to the new dir - even though the FAQ says it shouldn't have worked.
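     (For reference, rtorrent handles move-on-completion via an event handler in rtorrent.rc; there is no dedicated "finished directory" option. Something along these lines - the path is a placeholder and the syntax is the newer method.set_key style, so double-check it against your rtorrent version:

     # Move finished downloads out of the incomplete dir and update rtorrent's record of their location:
     method.set_key = event.download.finished, move_complete, "execute=mv,-u,$d.base_path=,/downloads/completed/;d.directory.set=/downloads/completed/"
     )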
  9. Well, here we go - after two days of working without any issues, the HTPC VM and the pfSense VM both froze after I tried to open a webpage with video in it (Twitch). The diag data is attached, but even without looking into the logs I can tell that HTPC stopped because of its video card (or USB controller), and pfSense because of its NIC. P.S. Before I added another USB card and moved some cards to different slots, the issue involved the HTPC VM and the SPC VM (not pfSense). srv-diagnostics-20181003-0139.zip
  10. Ok, I made one more hardware change - I added another Renesas USB card in place of the onboard ASMedia USB 3.1 controller and used it for my HTPC VM. I haven't done much testing yet, but it seems more stable. I have only one additional question: how do I stub PCIe cards these days? pci-stub.ids? vfio-pci.ids? xen-pciback.hide? Some combination? I could easily stub every device my VMs use, not a problem - but should I?
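      (The route I understand to be current on this Unraid generation is vfio-pci.ids on the kernel append line; pci-stub.ids and xen-pciback.hide are the older mechanisms. A sketch, with placeholder IDs:

      # Find the vendor:device IDs of the cards to stub:
      lspci -nn
      # Then bind them to vfio-pci at boot via the append line in syslinux.cfg
      # (IDs below are examples for a Renesas USB 3.0 and an ASMedia USB 3.1 controller):
      append vfio-pci.ids=1912:0014,1b21:2142 initrd=/bzroot
      )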
  11. Well, another issue has happened. I started my SPC VM after booting unRAID, and the whole system just froze (no error messages, nothing, and no connectivity at all). I reset the machine, and then there was no VM tab in the web interface. Never seen that before. The array is running, and I can see the Docker tab, but no VM tab. I don't understand what's going on. I tried rebooting several times; the tab still didn't reappear. Now doing a parity check. The diag file is attached. P.S. The VM freeze problems started when I updated from 6.3.1 to 6.5.2, but now it's way worse. I wish I could change the hardware, but I can't. Everything was pretty stable on 6.3.1... srv-diagnostics-20180930-0352.zip UPDATE: Got the VMs back - VM functionality was disabled in the settings. Why? How? No idea.
  12. Ok, a small update. I physically shuffled the cards inside the machine - moved the Renesas USB card and a network card (the one pfSense uses) to other PCIe slots. The situation has changed a bit: now, sometimes when I start the HTPC VM, pfSense freezes with the vfio_err_notifier_handler error on its network card (I'd never had an issue with the pfSense VM before). BTW, testing is a little hard, because half the time HTPC spends a couple of minutes on the Tianocore splash screen, and sometimes its USB controller (the onboard ASMedia USB 3.1) just doesn't work - the machine seems fine, there are simply no controls, and nothing helps besides stopping it from the VM manager... These issues aren't new; the HTPC VM has always had them.
  13. Hello! I've always had two Windows 10 VMs on my unRAID; let's call them SPC and HTPC. Both have had NVIDIA GPUs and USB controllers passed through from the very beginning, and everything was fine until I updated from an older unRAID to 6.5.2.
      1. Now when I start HTPC, my SPC can freeze immediately with: qemu-system-x86_64: vfio_err_notifier_handler(hardware address) Unrecoverable error detected. Please collect any data possible and then kill the guest. In this case the failed hardware is usually the video card together with its audio device (sometimes it's the Renesas USB card).
      2. If I manage to start HTPC and then restart SPC, HTPC might freeze randomly after some time with the same error for its video card (with its audio device).
      3. Or they might both freeze randomly if they run at the same time.
      What I've noticed: 1. SPC works for weeks if I don't start HTPC (I can't test the reverse, because I need SPC every day). 2. If HTPC freezes shortly after boot, it usually happens after I go to full-screen video (YouTube in Chrome) or after the screensaver starts.
      What I've tried: 1. Reinstalling Win10 on both VMs. 2. Slightly changing VM parameters like memory, and even recreating them completely from the ground up with new drive images.
      Additional info: 1. I turn my unRAID server off when I don't need it (it's not 24/7). In fact, I upgraded from the older unRAID version to 6.5.2 because the older one couldn't shut the machine down half of the time - it stayed powered on but completely dead; 6.5.2 turns the machine off properly every time. 2. I have a third VM there - pfSense - and it has never had any issues. 3. I'm running unRAID on a Core i7-6850K on an Asus X99 motherboard with 32 GB of RAM. SPC: GTX 1050 Ti + Renesas USB controller; HTPC: GT 740 + onboard ASMedia USB 3.1 controller.
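      (For anyone debugging similar vfio errors, one sanity check worth doing from the Unraid console is dumping the IOMMU groups, since devices that share a group cannot be cleanly split between two VMs. A small shell sketch:

      # Print every PCI device grouped by IOMMU group:
      for d in /sys/kernel/iommu_groups/*/devices/*; do
          g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
          echo -n "IOMMU group $g: "
          lspci -nns "${d##*/}"
      done
      )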
  14. Thank you! I'll try both workarounds you offered and report the results. But something tells me it isn't the state of a VM - it looks more like something related to the server itself, because when I get this issue after a server restart, no matter what I try or how I restart the VMs, it won't go away until I turn the server off and on again.
  15. Sorry, here it is. srv-diagnostics-20170331-1429.zip
  16. 1812, ty for taking a look at my logs! Unfortunately, I always get these lines in the syslog. Here's the diagnostics file taken after turning the server off and on; the VM is running perfectly, but the lines are still there.
  17. 1812, Done (SPC was the only VM running, performance was decreased). srv-diagnostics-20170331-0515.zip
  18. I isolated all cores but the first pair, and it didn't help, sorry. I got decreased performance right after a reboot.
  19. I'm a beginner in unRAID's virtualization world, but I managed to successfully run an unRAID server with virtualized gaming machines on it (the link with the details is below). After working through GPU passthrough stability problems (in short, only Q35 + OVMF + Win10 works in my case), I've run into another issue. I sometimes turn off my unRAID server to save electricity, and sometimes, after I turn it back on, VM performance turns out to be severely degraded. It happens from the moment the VMs start, and the only thing that helps is turning the server off and on again (sometimes I have to do it twice or even three times). I have two Windows VMs and one pfSense VM at the moment, and both Windows VMs are affected (I don't know how to check pfSense).
      The performance drop involves: 1. Laggy and stuttery Windows applications - e.g. Chrome starts much slower. 2. Severe issues with games: FPS is 3-5 times lower, and some games take an eternity to start. 3. CPU-Z clearly shows the score decrease: the normal score for one core is 1600-1700; degraded, it's around 500. Sometimes running CPU-Z in that state hangs the VM.
      It doesn't matter whether the VMs were started by autostart or launched one by one manually. Restarting a VM doesn't help - only turning the whole server off and on. It started roughly when I added the pfSense machine, but there's no logic to that: even with pfSense off (autostart disabled), I still get decreased performance in the Windows VMs with the same probability. The logs don't contain any errors. Just some info about my server and the issues I had before:
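      (One thing that could be checked in this state - purely a guess, not something confirmed in the thread - is whether the host cores come back up at full clock after the cold boot, since stuck-low clocks on the pinned cores would match the roughly 3x CPU-Z score drop:

      # On the Unraid console, while a VM feels slow:
      grep MHz /proc/cpuinfo
      cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 2>/dev/null | sort | uniq -c
      )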
  20. Yes, it would be very nice to be able to boot some VMs first. ESXi has this feature, for example (boot delay/boot order).
  21. Well, it looks like I've found a solution: I changed the machine type to Q35, and now the VM works fine even with a second (inactive) graphics card plugged into a motherboard slot (I only tried the 3rd one). Everything is rock stable, but I haven't tested it with a second/third/etc. VM running, and I haven't tried reinstalling Windows in the VM. So I'm marking the thread as solved, but there will be updates.
  22. OK, a small update. I've been experimenting with the single graphics card in slot 1 plus another graphics card installed in slot 3 (not used for anything in unRAID); the VM is the same. First, I found a flaw in my VM configuration: I had passed through the GPU but forgotten about its sound device. I fixed that, and the system in the VM became a bit more stable - fewer white lines and other artifacts, longer functioning time in Chrome and Opera - but it's still far from stable: after some time its state degrades beyond any usability. After that I tried enabling ACS Control in the BIOS - in theory it might help with hardware issues in virtualization, but nothing changed for me. Changing the PCIe generation to 2 (it should be 3 by default) didn't help either. Still, if I remove the second card (and it can be any of the three I've got), the VM becomes stable as a rock.
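      (The GPU-plus-sound fix amounts to passing through both PCI functions of the card. In the domain XML it looks roughly like this - the bus/slot values are from my card and will differ on other systems:

      <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> <!-- the GPU itself -->
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/> <!-- its HDMI audio -->
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
      </hostdev>
      )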
  23. I've had this several times. I don't remember for sure how I solved it - by pressing Esc or by typing "exit" - but after doing so, my Windows installation started every time.
  24. Hello, forum people! I think I really need your help; any advice would be greatly appreciated. Ten days ago I installed unRAID OS for the first time. All of its features work fine, but I'm having lots of issues with passing through GPUs. The idea was to create a server that hosts one gaming VM and several non-gaming VMs, each with its own video card.
      What I have:
      Intel Core i7-6850K (not overclocked)
      Asus X99-E WS/USB3.1 (latest BIOS)
      32 GB of RAM
      A running array with one SSD cache drive
      850W Corsair PSU
      No-name PCIe USB controller (works fine)
      GPUs: MSI GTX 1050 Ti (for the gaming VM, tested on another PC); Asus EN8400GS, MSI GT 610, GIGABYTE 210 (all for non-gaming VMs, all tested)
      (BTW, I built this server online, streaming it on Twitch; if anyone is interested, the video is still there, but it will be deleted in 3-4 days - https://www.twitch.tv/videos/122333373 - none of the problems described here appear in that stream, only the build itself and the first unRAID run.)
      The short story: I'm testing only one VM for now, and I have a problem with it: the GTX 1050 Ti works fine only when it's the primary card (single-card config) or the secondary card (with the primary card handling POST, BIOS and the OS console).
      What happens? When I install a second/third video card in a motherboard slot, the VM starts to produce all sorts of video glitches - but only in certain circumstances. FurMark and Unigine tests usually work, but when I open YouTube or Twitch in Chrome, or just start Opera, the picture gets covered with white horizontal lines, and from that point anything can happen. Usually the VM just hangs after some time, but it can produce all sorts of artifacts before going down. The only game I tested was DayZ; it simply doesn't start if I add any GPU on top of the 1050 Ti.
      I tested this in one Windows 10 VM (i440fx 2.7, OVMF), and it works fine in "console card + gaming card" or "single gaming card" configurations, but it throws artifacts in "gaming card + second/third card" or "console card + gaming card + third/fourth card" configurations.
      The 1st, 3rd, 5th and 7th PCIe slots are recommended for video cards by the manufacturer, so I always tried to use them (though one time I used non-recommended slots for the additional GPU). I tried the 1050 Ti in the 1st and 3rd slots, and I really want it in the 1st one, because it will stay at 16x even if I fill all the other slots with cards (a 16x-8x-8x-8x-8x-8x-8x configuration, according to the manual). The other devices passed to the VM work fine - the PCIe USB controller and the 2nd onboard NIC, plus I tried the onboard audio card (but disabled it). I've already tried removing them; it makes no difference.
      TL;DR: One VM with a GTX 1050 Ti in the server works fine. When I add more GPUs to the server (without even using them - neither for the console nor for a VM), that one VM throws video artifacts (mainly in browsers) and eventually hangs. I've tried a lot of things and I'm completely out of ideas. Please, if you have any thoughts on the subject, you are very welcome.
      P.S. The XML file is below. bus='0x05' slot='0x00' is the GTX 1050 Ti, bus='0x00' slot='0x19' is the 2nd onboard NIC, bus='0x09' slot='0x00' is the PCIe USB controller card. The GTX 1050 Ti is in its own IOMMU group with its HDMI audio; the NIC and the USB controller have their own dedicated IOMMU groups too.
<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>cd0b4afc-6c16-4299-a4e9-2b892e810eac</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>8388608</memory>
  <currentMemory unit='KiB'>8388608</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='5'/>
    <vcpupin vcpu='4' cpuset='8'/>
    <vcpupin vcpu='5' cpuset='9'/>
    <vcpupin vcpu='6' cpuset='10'/>
    <vcpupin vcpu='7' cpuset='11'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/cd0b4afc-6c16-4299-a4e9-2b892e810eac_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='4' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/domains/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows.10.x86-x64.Ver1607+LTSB+-Office2016.24in1.by.SmokieBlahBlah.16.08.16.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.126-2.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:e2:b6:74'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/Temp/vbios.dump'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x19' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </memballoon>
  </devices>
</domain>