Starlord

Everything posted by Starlord

  1. Thanks! I just... pasted the <features> block from my Windows 10 XML and, lo and behold, it worked. Manjaro loaded the proprietary driver fine and I'm typing this post from it now. Damn, I hate NVIDIA sometimes. Thanks for pointing me in the right direction, though; the Hyper-V-specific terminology tripped me up.
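     For anyone who lands on this thread later, a rough sketch of the kind of <features> block that did the trick; the exact enlightenments and the vendor_id string here are illustrative rather than a copy of my template:
     ```
     <features>
       <acpi/>
       <apic/>
       <hyperv>
         <relaxed state='on'/>
         <vapic state='on'/>
         <spinlocks state='on' retries='8191'/>
         <!-- the value just has to be present; the exact string doesn't matter -->
         <vendor_id state='on' value='none'/>
       </hyperv>
       <kvm>
         <!-- often added as well when the driver still refuses to load: hides the KVM signature -->
         <hidden state='on'/>
       </kvm>
     </features>
     ```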
  2. Same story on Pop_OS as well. No GPU output once the NVIDIA proprietary driver loads, and nothing in the VM log. Here's my XML: <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm'> <name>Pop_OS! 20.04 Beta</name> <uuid>5f1bc17e-aead-362c-554d-802b75490dd1</uuid> <description>Tim&apos;s Linux Workstation</description> <metadata> <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/> </metadata> <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>16777216</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>16</vcpu> <cputune> <vcpupin vcpu='0' cpuset='8'/> <vcpupin vcpu='1' cpuset='40'/> <vcpupin vcpu='2' cpuset='9'/> <vcpupin vcpu='3' cpuset='41'/> <vcpupin vcpu='4' cpuset='10'/> <vcpupin vcpu='5' cpuset='42'/> <vcpupin vcpu='6' cpuset='11'/> <vcpupin vcpu='7' cpuset='43'/> <vcpupin vcpu='8' cpuset='12'/> <vcpupin vcpu='9' cpuset='44'/> <vcpupin vcpu='10' cpuset='13'/> <vcpupin vcpu='11' cpuset='45'/> <vcpupin vcpu='12' cpuset='14'/> <vcpupin vcpu='13' cpuset='46'/> <vcpupin vcpu='14' cpuset='15'/> <vcpupin vcpu='15' cpuset='47'/> </cputune> <os> <type arch='x86_64' machine='pc-q35-4.2'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/5f1bc17e-aead-362c-554d-802b75490dd1_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> </features> <cpu mode='host-passthrough' check='none'> <topology sockets='1' cores='8' threads='2'/> <cache mode='passthrough'/> <feature policy='require' name='topoext'/> </cpu> <clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/domains/Pop_OS! 
20.04 Beta/vdisk1.img'/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/pop-os_20.04_amd64_nvidia_4.iso'/> <target dev='hda' bus='sata'/> <readonly/> <boot order='2'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <controller type='sata' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'/> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x9'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0xa'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0xb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0xc'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/> </controller> <controller type='pci' index='6' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='6' port='0xd'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/> </controller> <controller type='pci' index='7' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='7' port='0xe'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/> </controller> <controller type='pci' index='8' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='8' port='0xf'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/> </controller> <controller type='pci' index='9' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='9' port='0x10'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='usb' index='0' model='qemu-xhci' ports='15'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:10:5a:85'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <target type='virtio' name='org.qemu.guest_agent.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x49' slot='0x00' function='0x0'/> </source> 
<rom file='/mnt/user/isos/1080ti.rom'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x49' slot='0x00' function='0x1'/> </source> <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x0e' slot='0x00' function='0x3'/> </source> <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x0d' slot='0x00' function='0x3'/> </source> <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x4a' slot='0x00' function='0x3'/> </source> <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x046d'/> <product id='0xc52b'/> </source> <address type='usb' bus='0' port='1'/> </hostdev> <memballoon model='none'/> </devices> </domain>
  3. Is there a Code 43 type issue with the Linux NVIDIA drivers as well? Nouveau works fine: I get display, can finish the GUI install, play videos, general use, etc., but the proprietary driver has better gaming performance. Once I install the proprietary driver, though, I just get a black screen after the bootloader. The GPU is a 1080 Ti in the primary slot, with the vBIOS dumped and passed through. I'm trying to install Manjaro, but I'm downloading Pop_OS right now since its live installer has the driver baked in, to see if that works at all. Just wondering if I need to add the vendor ID to the XML or something for Linux guests as well (since Hyper-V is Windows-specific anyway, I assumed adding that to the XML would be pointless from the get-go).
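     For anyone hitting the same thing, these are the sorts of checks I'd run inside the guest (over SSH or a spare TTY) once the screen goes black; they're standard commands, and which one surfaces the problem will vary, so take them as a starting point:
     ```
     # Confirm which kernel driver is actually bound to the passed-through GPU
     lspci -nnk | grep -iA3 nvidia

     # Look for the NVIDIA kernel module complaining during init
     sudo dmesg | grep -iE 'nvrm|nvidia'

     # See whether Xorg picked the card up or bailed out (log path may differ per distro)
     grep -iE 'nvidia|\(EE\)' /var/log/Xorg.0.log
     ```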
  4. Oof, that's what I was afraid of. The out-of-tree driver has made using the Rocket 750 outside of unraid nearly impossible, and I was worried that might be the issue here. I emailed them about it ages ago with no response... good to know you guys haven't had any luck either. I'll be avoiding RocketRAID products like the plague from now on; IMO that's unacceptable. Thankfully, I ordered two LSI controllers similar to the ones in the new Storinators, and they came in today. I'll be swapping them in this afternoon and will report back with results.
  5. The second post has the only relevant bits I was able to spot in the syslog; I'm not seeing anything else out of the ordinary.
  6. Yeah, it's currently on 6.6.7; it was not updated to 6.7. It's been solid for a good two years.
  7. The BIOS is up to date as of last night. I'll try re-seating the SAS cables into the Rocket, but the drives themselves are attached to a backplane and I've already tried moving slots. I'll run a RAM test too, and since this machine is rack-mounted in its own climate-controlled closet, I don't think bumping/moving is an issue. The catch is that this is a production server that handles a site-to-site VPN, so I have to be selective about bringing it down. Might be a few hours before I report back.
  8. Oh, my bad, the system in question is different from the one in my sig. It's a 45-drive Storinator, dual Xeon, real fancy. It's a client's machine, not "mine". I need to edit the personal rig in my sig too; it's out of date. Will do so now.
  9. Don't tell me my Rocket 750 is biting the dust...
  10. I'm pulling my hair out at this point. Diagnostics.zip attached. So, a disk started throwing SMART errors. No big deal; I followed the wiki and swapped it. Everything seemed happy until about halfway through the parity rebuild, when my parity drive suddenly started reporting read errors... so I followed the wiki on the parity swap procedure and this time added in 2 parity drives. I ran the rebuild again, considering the original failed drive an acceptable loss, and was prepared to move on. It "finished" but then dropped disk 1 and disk 5 from the array and said they had a bunch of read errors. So now I'm in panic mode. I took the array offline and booted it in maintenance mode. I ran xfs_repair -v /dev/sdx on both disks; after several hours of just dots scrolling through the terminal, both reported that no secondary superblock could be found. So I started the array minus the 2 trouble disks and attempted to mount disk 1 partition 1 manually. I was able to browse files on the disk and started copying them to disk 2. It got about 3/4 of the way through before MC reported an input/output error, and now I can't manually browse disk 2:
     ```
     root@TPC-Abraham:/mnt/disk2# ls
     /bin/ls: cannot open directory '.': Input/output error
     ```
     The webGUI also reports 2018 read errors on the disk... I'm running out of disks to move stuff to here... currently working on a solution to back up what data I have left to the cloud. tpc-abraham-diagnostics-20191007-1318_1.zip
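     For anyone reading along, the manual mount-and-copy was along these lines (the device name and destination folder are placeholders, not the exact commands I ran):
     ```
     # Mount the first partition of the troubled disk read-only at a scratch path
     mkdir -p /mnt/rescue
     mount -o ro /dev/sdX1 /mnt/rescue

     # Copy whatever is still readable onto another array disk; rsync carries on
     # past individual unreadable files and lists the failures at the end
     rsync -av /mnt/rescue/ /mnt/disk2/rescue/

     umount /mnt/rescue
     ```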
  11. I didn't read it in a way that seemed harsh to me, so no worries! Loading games from a network share has been "fine" for me, but some clients, like Battle.net, won't even let you install games on a mapped drive. I tried symlinking too, but the client still detected it was a network share and refused to install my games. If I used the SATA drives to store my game drive, I'd have to move my media library to the NVMe drives, which seems even more pointless to me lol. What I ended up doing is just re-balancing the RAID level for the cache, roughly as sketched below. This gives me my 3TB image for Windows games (outside of future parity) with 1TB left over for normal cache stuff. I think I'm happy with this for now. Will mark solved.
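     A sketch of that re-balance, for anyone who finds this later; the target profiles (single for data, raid1 for metadata) are one reasonable way to free up most of the raw capacity, not necessarily the exact profiles I ended up on:
     ```
     # Convert the data profile so capacity isn't halved by mirroring,
     # while keeping metadata mirrored across devices
     btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache

     # Watch the conversion progress
     btrfs balance status /mnt/cache
     ```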
  12. Well, I'd like to have the option to mount the image in my Linux VM as Proton gets better and better. I work in IT and I don't currently have a girlfriend, so I do alright, but no, not that rich lol. I just happened to stumble on an insane deal on most of these drives. The 4x Intel NVMe SSDs and the 2 ADATAs were grabbed at the same time from a local shop that was closing; I got them all on the last day for $190. I've had that Samsung for 2-3 years. My rationale for storing my game library on those NVMe drives is pretty simple: my games library hasn't grown in over a year and is only going to grow by whatever Cyberpunk uses. So for the most part it's read-only other than updates, etc., and I have a whole 3/4 of a terabyte left after I install EVERY game I own. I don't see myself filling that up in the next 5 years unless, like... Cyberpunk 2 comes out and is 750GB. I don't plan on picking up more NVMe drives other than one for Pop_OS, and it's not going to be used by unraid. SATA SSDs are cheap enough now that I can pick up 1-2 a pay period to grow the SATA array for my constantly expanding media library. My main OSes are already on NVMe drives, so why not benefit from shorter loading times in games thanks to the higher read/write speeds over SATA SSDs.
  13. Hello! My problem isn't really a technical one so much as a planning one. I already have unraid doing everything I want/need; I just want to make sure I'm using my setup to its fullest. I have a bit of a different setup: it's all SSD, a mixture of seven 1TB PCIe NVMe SSDs of varying brands (Intel, ADATA, and one Samsung) and four 1TB SATA SSDs (all SanDisk). The end goal is that I'd like to give unraid all of the SATA disks, because I plan on adding 4-12 more in the near future, and I've got a nice fat Docker stack that'll run on it as well. The Samsung is already passed through and is the boot disk for my Windows VM. The next NVMe I find a good deal on is going to be dedicated to a Pop_OS VM, and my homelab VMs are just simple ones that can run on images on the SATA array. What I'd like to do is take my 4 Intel NVMe SSDs and essentially store a 3-4TB image that holds my entire game library, mounted in my Windows VM only. Problem is, when I assign them as cache drives it seems to stripe on 2 and mirror on 2, reducing the pool size to just 2TB. I know I can change the RAID level via the terminal with btrfs, but is that the best solution to achieve a single mountable disk image across 4 of my fastest drives? I don't really care about parity just yet; I only have about 10GB of data I can't afford to lose, and that's already backed up in half a dozen other places. In the future I'd love to run a FOG server to make incremental image-level backups of all of my virtual machines, but I need to grow my array first. Any suggestions? I've attached my hardware profile, and I can start fresh with an entirely new config if need be. rasputin.xml
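     For context, this is how the pool's current layout can be checked (standard btrfs commands; whatever data profile shows up here, RAID1 or RAID10, is what explains four 1TB drives only yielding ~2TB usable):
     ```
     # Show how data and metadata chunks are allocated across the cache pool
     btrfs filesystem df /mnt/cache
     btrfs filesystem usage /mnt/cache
     ```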
  14. Disk 6 was initialized into the array but did not have user data on it. But you are correct. That's exactly how he did the disk swap.
  15. I took a look at his system. Here's the diagnostics zip. He re-assigned every drive with a new config when he installed the new parity drive. I pointed Docker at his old image, which now lives on what unraid previously saw as disk 1 but is now disk 2, and Docker straight up refused to start. I didn't have time to help him further today (I know OP IRL). tower-diagnostics-20190129-1923.zip
     Here's the log from when I tried to start Docker while it was pointed at the old image:
     Jan 29 19:29:40 Tower emhttpd: shcmd (34312): /usr/local/sbin/mount_image '/mnt/disk2/system/docker/docker.img' /var/lib/docker 60
     Jan 29 19:29:41 Tower kernel: BTRFS: device fsid 5932acc8-3d13-469c-ac36-2392ce4dfcc1 devid 1 transid 5090890 /dev/loop3
     Jan 29 19:29:41 Tower kernel: BTRFS info (device loop3): disk space caching is enabled
     Jan 29 19:29:41 Tower kernel: BTRFS info (device loop3): has skinny extents
     Jan 29 19:29:41 Tower kernel: BTRFS error (device loop3): bad tree block start 0 21993553920
     Jan 29 19:29:41 Tower kernel: BTRFS error (device loop3): bad tree block start 0 21993553920
     Jan 29 19:29:41 Tower kernel: BTRFS warning (device loop3): failed to read root (objectid=4): -5
     Jan 29 19:29:41 Tower kernel: BTRFS error (device loop3): open_ctree failed
     Jan 29 19:29:41 Tower root: mount: /var/lib/docker: wrong fs type, bad option, bad superblock on /dev/loop3, missing codepage or helper program, or other error.
     Jan 29 19:29:41 Tower root: mount error
     Jan 29 19:29:41 Tower emhttpd: shcmd (34312): exit status: 1
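     If anyone wants to poke at the image directly, a read-only check along these lines should confirm whether the docker.img filesystem itself is shot (the image path matches the log above; the loop-device steps are the generic approach, not something I ran on his box):
     ```
     # Attach the docker image to a free loop device without mounting it
     LOOP=$(losetup -f --show /mnt/disk2/system/docker/docker.img)

     # Run a read-only btrfs check against the image
     btrfs check --readonly "$LOOP"

     # Detach the loop device when done
     losetup -d "$LOOP"
     ```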
  16. I've used my Rocket 750 with unraid since 2016 and I've had zero issues with it. In fact, unraid is the only distro I've been able to get it working in at all.
  17. Forgot to update: with zenstates disabled, all issues still persist.
  18. So I was able to get this working sending and receiving mail (static IP, PTR record set by my ISP, all ports forwarded and working), but I'm having issues getting it working with my nginx reverse proxy... I keep getting a 502 error. Here's my proxy conf:
     server {
         listen 443 ssl;
         listen [::]:443 ssl;
         server_name mail.*;
         include /config/nginx/ssl.conf;
         client_max_body_size 0;
         # enable for ldap auth, fill in ldap details in ldap.conf
         #include /config/nginx/ldap.conf;
         location / {
             # enable the next two lines for http auth
             #auth_basic "Restricted";
             #auth_basic_user_file /config/nginx/.htpasswd;
             # enable the next two lines for ldap auth
             #auth_request /auth;
             #error_page 401 =200 /login;
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_mail mail;
             proxy_pass http://$upstream_mail:4433;
         }
     }
     and here's the container setup
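     Edit: one thing I still need to try (just a guess based on the 502, not something I've verified): if the mail container terminates TLS on 4433 itself, the proxy_pass scheme would have to match, something like:
     ```
     location / {
         include /config/nginx/proxy.conf;
         resolver 127.0.0.11 valid=30s;
         set $upstream_mail mail;
         # https here if the container speaks TLS on 4433; proxying plain http
         # into a TLS port is a classic way to end up with a 502
         proxy_pass https://$upstream_mail:4433;
     }
     ```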
  19. Yeah, I saw that post. I miss my rolling release, but getting my Rocket 750 working in anything but unraid has been unfruitful.
  20. I'll take a look at this now. Edit: Oh, I need to patch QEMU in unraid? Oh boy. I'll start looking into that too. I really wish unraid were just a paid menu I could install on top of my distro of choice, for this exact reason lol.
  21. I'll disable it, then see if it resolves any issues. I also just noticed my CPU fan randomly ramps up and then immediately back down.
  22. I picked up a Rocket 750 second-hand ages ago, but I haven't been able to get the driver to install on Arch since kernel 4.8. Unraid seems to have no issues running the card and is on a much newer kernel. I always get errors about some file in /etc/init.d which doesn't exist, and when enabling the systemd service for it, it also fails citing the same missing files in /etc/init.d. Has anyone else gotten this working? If I get it figured out, I'll make a PKGBUILD and throw it on the AUR.
  23. Oh boy. New upgrade, new set of issues. First things first: CPUs are pinned and isolated, but the VMs are still super slow. My Vega 64 VM still suffers from the reset bug, but the RX 580 VM is sloooowwww. Fix Common Problems reported that I need to enable zenstates; I did so following the instructions (editing the go file, roughly the line sketched below), but after several reboots it still reports zenstates isn't enabled. I also have an error that states:
     ```
     Your CPU is running constantly at 100% and will not throttle down when it's idle (to save heat / power). This is because there is currently no CPU Scaling Driver Installed. Seek assistance on the unRaid forums with this issue
     ```
     Also, /var/log keeps filling up. Only the Vega 64 reset issue was present on my 1950X. Any guidance would be appreciated! Docker is working great, so that's nice, and unraid is usable for me for now, but I'd really like my Linux workstation and Windows gaming PC back so I can stop using my laptop lol. rasputin-diagnostics-20190106-1106.zip
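     For reference, the go-file line and a quick scaling-driver check; the zenstates path and flag are how I remember the plugin's instructions, so treat them as approximate rather than copied from my flash drive:
     ```
     # appended to /boot/config/go (path/flag per the plugin's instructions, from memory)
     /usr/local/sbin/zenstates --c6-disable

     # after a reboot, see whether any CPU frequency scaling driver is loaded at all
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
     ```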