joecool169

Members • 91 posts
Everything posted by joecool169

  1. I've been trying to figure this out. Recently my Arch VM GPU passthrough has quit working. A while back, perhaps with the update to 6.9, I began to notice that when I start my Windows VM I no longer see the BIOS screen or loading screen on the monitor. This is strange because I used to see these things; now the first thing I see is the Windows 10 login screen. I think this could be related to my Arch passthrough not working anymore. The Windows VM works fine, no complaints. Shut it down, start Arch, and I just get a blank screen. Created an Ubuntu VM, also a blank screen. The Windows VM starts right back up and works fine. I've tried lots of things, but if you have an idea, throw it out and I'll try it. joeserver-diagnostics-20210416-1753.zip
  2. I fixed this issue yesterday. I fought the problem the entire day. It turned out to be a bad USB flash drive. I'm not sure when my most recent backup was or where I put it, so I created a new flash drive with the Unraid USB Creator and then copied the config folder from the old drive to the new one. Everything started up and worked fine. What made the issue most confusing was that I thought the problem was somehow something I had caused. After troubleshooting for the entire day, I think the flash drive was my issue right from the get-go.
  3. Yesterday I used ACS override to split my 10GbE network card. I wanted to pass one port through to a pfSense VM. That didn't work out, so before I switched the ACS override off I decided to uncheck all the boxes in the System Devices section. I don't know why I tried this, but it seemed like a good idea at the time. So I unchecked all the boxes and then clicked reboot. Since then my system will not get an IP address; well, it gets a 169.254.x.x address. When I try to boot the GUI I just get a blinking cursor at the top left of my monitor. I tried deleting network.cfg off the USB stick, but that did not help. I tried connecting all three network ports to my switch in case Unraid was using a different one, and that didn't help. I also tried connecting a separate switch and my laptop, assigning myself an address in the 169.254 range to connect. All attempts to connect fail. tower-diagnostics-20210328-1455.zip
  4. Got mine working, but I'm not sure how. I followed the video guide exactly, spent hours re-checking everything, and re-watched the video several times. Tried loads of stuff: different ports, proxynet, everything I could think of. After two nights of troubleshooting I put it all back to the way it was in the video and it worked. Normally after I go through something like this I can look back and figure out what I did wrong; not this time. I spent 6+ hours troubleshooting, went back to where I started, and everything worked.
  5. Me too. If I change Guacamole to the proxynet that I set up in previous tutorials, I can then access the webui remotely, but I no longer have a connection to the VM that I set up to control.
  6. Well, after having tried everything, I was finally able to upgrade the video card in my gaming rig. So I popped the GTX 1070 I retired from there into my Unraid server. The Arch Linux VM immediately booted up with GPU passthrough working. Windows still does not work, but I fiddled with it so much I am guessing I may need to delete my XML and start over.
  7. Well, if you read that thread entirely, someone from Unraid did tell me it was an AMD issue. I've tried three video cards.
  8. Hi everyone, my current setup is:
       • Ryzen 2600X
       • Asus PRIME X470-PRO
       • 32GB DDR4-3200 RAM
       • NVIDIA 8800 GTX
       • LSI SAS 9211-8i
       • Broadcom 10GbE network adapter
     After putting forth much effort, I am unable to pass through my GPU successfully and reliably. My question is: if I upgraded to an X570 board, would I have a better chance? Or should I just be thinking about Intel?
  9. Well, I happen to have an Intel i7-8700K and EVGA Z370 FTW with a GTX 1070 sitting on my desk; that is my main PC. Part of the reason for all of this passthrough stuff was an attempt to see if I could be happy with my main machine being Unraid and a VM. If I were to put all my hard drives into the Intel machine, can I expect that things will run more smoothly? I really do not want to be working on it all the time.
  10. Well, after spending most of the weekend working on this, I can pass the card through and everything works until I need to restart the Windows 10 VM. After selecting restart, the monitor attached to the server goes black; if I log in with Splashtop I can see that it goes back to code 43. I could use some help; I feel like I'm mostly just talking to myself here.

      <?xml version='1.0' encoding='UTF-8'?>
      <domain type='kvm'>
        <name>UR Gamer</name>
        <uuid>a20b3e5d-85af-0d96-37c2-d8b226341b42</uuid>
        <description>Unraid Server</description>
        <metadata>
          <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
        </metadata>
        <memory unit='KiB'>8388608</memory>
        <currentMemory unit='KiB'>8388608</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>4</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='4'/>
          <vcpupin vcpu='1' cpuset='10'/>
          <vcpupin vcpu='2' cpuset='5'/>
          <vcpupin vcpu='3' cpuset='11'/>
          <emulatorpin cpuset='0,6'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/a20b3e5d-85af-0d96-37c2-d8b226341b42_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
          <hyperv>
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
            <vendor_id state='on' value='1234567890ab'/>
          </hyperv>
          <kvm>
            <hidden state='on'/>
          </kvm>
          <ioapic driver='kvm'/>
        </features>
        <cpu mode='host-passthrough' check='none'>
          <topology sockets='1' cores='2' threads='2'/>
          <cache mode='passthrough'/>
          <feature policy='require' name='topoext'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='hypervclock' present='yes'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/isos/Windows.iso'/>
            <target dev='hda' bus='sata'/>
            <readonly/>
            <boot order='2'/>
            <address type='drive' controller='0' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/isos/virtio-win-0.1.173-2.iso'/>
            <target dev='hdb' bus='sata'/>
            <readonly/>
            <address type='drive' controller='0' bus='0' target='0' unit='1'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/user/domains/UR Gamer/vdisk1.img'/>
            <target dev='hdc' bus='sata'/>
            <boot order='1'/>
            <address type='drive' controller='0' bus='0' target='0' unit='2'/>
          </disk>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
          </controller>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='1' port='0x10'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='2' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='2' port='0x11'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
          </controller>
          <controller type='pci' index='3' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='3' port='0x12'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
          </controller>
          <controller type='usb' index='0' model='ich9-ehci1'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci1'>
            <master startport='0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci2'>
            <master startport='2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci3'>
            <master startport='4'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:01:93:0e'/>
            <source bridge='br0'/>
            <model type='virtio'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='tablet' bus='usb'>
            <address type='usb' bus='0' port='2'/>
          </input>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
            <listen type='address' address='0.0.0.0'/>
          </graphics>
          <video>
            <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
          </video>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
            </source>
            <rom file='/mnt/user/Downloads/rom/evga.rom'/>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x0e8f'/>
              <product id='0x00a8'/>
            </source>
            <address type='usb' bus='0' port='1'/>
          </hostdev>
          <memballoon model='none'/>
        </devices>
      </domain>
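
      (A possible tweak, not part of the original post: with the emulated QXL adapter marked primary alongside the passed-through GPU, NVIDIA drivers of that era would sometimes report code 43 because the card was not the primary display device. A minimal sketch of the change would replace the <video> block above with:

          <video>
            <model type='none'/>
          </video>

      This makes the passed-through card the only display device, at the cost of losing the VNC console.)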
  11. Since the last post I installed an NVIDIA 8800 GTX in place of the Radeon card, but when I try Arch I get the same results. I tried a fresh Arch VM, but all Arch VMs refuse to boot the install media if the video card is passed through. UPDATE: I plugged a dedicated monitor into the server. As it turns out, the screen does not go blank when I attempt the passthrough; I was switching inputs on my monitor and seeing a blank screen, not realizing that the Unraid console blanks after a few minutes (the spacebar quickly brings the command prompt back). The monitor attached to the server never changes, so I assume the passthrough is never working at all. But the passthrough attempt does make any VM unusable. SECOND UPDATE: I have passed the card through to a Windows 10 VM, but now I'm getting code 43 in Windows.
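
      (For reference, and not from the original post: the usual workaround for NVIDIA's code 43 in KVM guests is hiding the hypervisor from the guest driver, which the XML in post 10 above already carries. The minimal form, as a sketch:

          <features>
            <hyperv>
              <vendor_id state='on' value='1234567890ab'/>
            </hyperv>
            <kvm>
              <hidden state='on'/>
            </kvm>
          </features>

      Older NVIDIA drivers refused to initialize when they detected that they were running in a VM; these two settings mask that.)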
  12. So I have tried off and on for months to pass through my Radeon HD 6850 to any VM without success. Kinda frustrating. I tried three Windows 10 VMs, macOS, and my latest attempt is Arch Linux. I pass the card through and I get an error: "no suitable video mode found". As soon as I attempt to pass the card through to any VM, the monitor attached to the card loses signal, and the signal never comes back unless I reboot. So my only way of trying to log in to the VM is through my Fedora install with the VM manager. Any idea how I can get this to work, or what info I need to provide to troubleshoot? I log in to the VM with NoMachine because I am trying to learn more about Linux, but the graphics are very laggy. I assume the lag might be resolved if I could pass this video card through.
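
      (One thing sometimes suggested for this situation, not from the original post: when the host console has initialized the only GPU at boot, the guest often cannot reset it, and supplying a clean vBIOS dump via a rom element can help, as the XML in post 10 does with evga.rom. A sketch, with a hypothetical PCI address and ROM path:

          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
            </source>
            <rom file='/mnt/user/isos/vbios/hd6850.rom'/>
          </hostdev>
      )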
  13. Well, from what I read in these forums, I think the network errors are probably nothing to worry about. Can anyone recommend which model of Intel 10GbE NIC I might buy with no fan? Looking for one for Unraid and one for Windows.
  14. One last reply to this thread just to let you know that after confirming with you that the pool was set up correctly and configuring nzbget to use this RAID 0 test pool as the /intermediate directory, my downloads go much more smoothly, especially the unpacking part, and I/O wait times are very low, usually less than 5%. Thanks again for your help.
  15. First of all, many thanks to the community for the awesome work in these forums. I have had several questions in the past few weeks, and for the most part the community has solved my problems. My new server has had a few bugs, and I've made some errors, but I just try to work through them one at a time. I bought a couple of 10GbE Broadcom cards off eBay: one in the Unraid server, the other in my Windows 10 desktop, connected with a MikroTik switch. I'm getting these errors in netdata:
       net_drops.eth0: interface inbound dropped packets in the last 10 minutes (role: sysadmin)
       net_fifo.eth0: interface fifo errors in the last 10 minutes (role: sysadmin)
     In Windows (on my main desktop, not an Unraid VM) I sometimes get this error: "This device is disabled because the firmware of the device did not give it the required resources. (Code 29)" A reboot of Windows fixes the issue, sometimes for a week, sometimes for a minute. Do you guys think these are fixable? Or should I shop for a couple of Intel NICs? I'm tempted to order a couple of used Intel NICs, but I am not certain which model would be best. Having at least two ports is nice, 10GbE is what I want, and I would prefer no cooling fan (the fans on the Broadcom NICs are quite noisy).
  16. I don't think it was fine when I first set it up, but idk. I guess I was pretty flustered at this point, so maybe I just read it wrong. I know the green light next to sdb kept going out like the disk had spun down. I have all my disks set to never spin down, and that is an SSD, so I'm pretty sure it was never spinning to begin with. But thank you so much for your quick replies; I'm sorry if I wasted your time. I'm slowly getting all this figured out, one thing at a time.
  17. Overall:
          Device size:         670.71GiB
          Device allocated:    353.06GiB
          Device unallocated:  317.65GiB
          Device missing:          0.00B
          Used:                350.32GiB
          Free (estimated):    319.05GiB  (min: 160.22GiB)
          Data ratio:               1.00
          Metadata ratio:           2.00
          Global reserve:      366.73MiB  (used: 0.00B)

                        Data       Metadata   System
      Id Path           RAID0      RAID1      RAID1     Unallocated
      -- ---------  ---------  ---------  --------  -----------
       1 /dev/sdb1  117.00GiB    1.00GiB  32.00MiB    105.54GiB
       2 /dev/sdd1  117.00GiB          -  32.00MiB    106.54GiB
       3 /dev/sde1  117.00GiB    1.00GiB         -    105.57GiB
      -- ---------  ---------  ---------  --------  -----------
         Total      351.00GiB    1.00GiB  32.00MiB    317.65GiB
         Used       349.60GiB  368.33MiB  48.00KiB
  18. I set up RAID 0 with three unassigned SSDs. All seemed to go well. I tried to set nzbget to use the RAID 0 array for an intermediate directory, so I assigned another path in the docker: /intermediate for the container path and /mnt/disks/test for the host path (see the sketch below). In the nzbget settings I changed the intermediate path to /intermediate. Not only does nzbget seem to ignore my changes, but one of the drives that was in the RAID now appears to not be in the pool anymore, with this error in the system log: "kernel: BTRFS error (device sdb1): devid 2 uuid fa0f6263-8da8-4ecc-b400-2de8fdc56c6e is missing". Have I done something wrong? The actual issue that I have been trying to fix for days is high iowait time. The longer nzbget runs, the worse it gets, until eventually the server is super laggy and CPU usage very high. If I throttle the downloads back it seems to help; the fastest sustained speed I can run the downloads at is about 100 Mbit/s. At that rate iowait is about 31% according to netdata. Binhex nzbget container.
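
      (A sketch of how that extra path mapping might look in the Unraid docker template XML. The Config attributes follow Unraid's template schema, but the names and values here are assumptions, not taken from the original post; note the "slave" propagation flag that mounts under /mnt/disks from Unassigned Devices generally need:

          <Config Name="Intermediate" Target="/intermediate" Default="" Mode="rw,slave"
                  Description="nzbget intermediate directory" Type="Path"
                  Display="always" Required="false" Mask="false">/mnt/disks/test</Config>
      )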
  19. Thank you very much. Where I was going wrong was not picking the right MAC address in the interface rules. I thought that because I picked the MAC address my router showed in its history, I was picking the right one. But the MAC address in the router's history was the MAC for the onboard LAN that I have never plugged a cable into. So to recap: I turned bonding off, picked the right MAC address in interface rules, and rebooted. I then had to re-create my reservation and port forwards in my router, and all is good. So the point of all of this was that I wanted to try out pfSense. With my current setup I could now, right? Plug the modem into the onboard port or the other 10GbE port, and the switch stays plugged into the current port?
  20. It gives that error the very first time you try to start the VM after a server reboot. As a test I disabled auto-start on all Docker containers and VMs, and even the array. After the reboot I brought the array online and then started that VM, and that VM only. Same error. What else can I try?
  21. Right, but that leaves the connection bonded. Is bonding the default setting? How do I get them un-bonded? Or should they be?