TheSkaz

Members
  • Posts: 148
  • Joined
  • Last visited

Everything posted by TheSkaz

  1. So far, SeaBIOS has caused the Windows install to take 12 hours or so.... I also just realized that I was installing the RTM version of Windows 10.... that could have been an issue.
  2. This is what I got: "This device is not working properly because Windows cannot load the drivers required for this device. (Code 31)" I'll try SeaBIOS (as per your link).
  3. I have a virtual machine that I am trying to pass one of my GPUs through to. This GPU is the main one that Unraid uses when it boots; the other GPU is used for Plex. I have downloaded and modified the ROM from TechPowerUp and am using it. This is how it presents in Windows: here is the relevant part of the VM config: when attempting to install the NVIDIA drivers through Windows, I get this: and when I try through NVIDIA: and if I follow those instructions: Windows version:
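     For reference, a modified vBIOS is normally attached via a <rom> element inside the GPU's hostdev stanza of the VM's libvirt XML. A minimal sketch; the PCI address and ROM path below are placeholders for illustration, not values from this system:

     ```xml
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source>
         <!-- host address of the GPU; placeholder values -->
         <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
       </source>
       <!-- point at the modified ROM dump; path is illustrative -->
       <rom file='/mnt/user/isos/vbios/gpu.rom'/>
     </hostdev>
     ```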
  4. Yep, I built everything. I have pics of the build process if anyone is interested
  5. Table with computer shelf removed Sent from my SM-N960U using Tapatalk
  6. 1st Build:

     Unraid 6.8.2 Pro
     Asus Z9PE-D16/2L
     2x Intel Xeon E5-2697 v2 (2.7GHz, 12-core)
     256GB DDR3 LR-DIMM RAM
     10x 2TB WD Red/WD Green/Hitachi drives
     6x 240GB SSDs (ZFS raid array for VMs)
     2x 120GB SSDs (for the cache drive)
     2x EVGA 1070s (that happened to be watercooled...)

     These pictures were in the original case. I took the 2080 Ti out, put it in my main machine, and swapped in the 2x 1070s. After that, I decided to upgrade the case. I don't have a picture of it, but I had a Thermaltake Tower 900 case with two water lines coming over to this one to cool the 1070s. The case will become a desk PC... essentially... the PC on the right is the Unraid server.

     OK, that was the first build. The lower processor's watercooling line worked loose as I moved this 350lb thing around, and it got the CPU and mobo wet. POP!!! Because all of my VMs are on there (for work), I needed to scramble and get a new motherboard that day. They don't stock those anymore lol, which leads me to server #2:

     Asus Prime TRX40-Pro
     AMD Threadripper 3990X
     256GB DDR4 RAM
     10x Seagate Exos Enterprise 16TB drives
     6x 240GB SSDs (my VMs)
     2x 1TB NVMe drives for cache
     10G ASUS NIC
     2x 1070s
     Marvell dual SFF-8087 controller card....... whoops

     I ended up swapping out that Marvell card for a Supermicro one which, with the 8 onboard ports and 2x M.2 slots, gives me my 18 drives again. Due to the slot spacing on the board, I had to put the HBA in upside down.
  7. Thank you. I might just start over using that method. I would rather wipe everything and start clean; when I started, I messed a few things up.
  8. Follow-up question: is there a way to completely wipe everything and start over with just my product key?
  9. I upgraded my dual-Xeon board to a Threadripper (3990X). I had to add an HBA controller, and I messed up and got a Marvell one that has the virtualization issue. I can boot up normally, but as soon as I hit the web app the whole system freezes; I cannot even log in locally. I booted into safe mode and got the diagnostics. Virtualization is disabled until I get a new card (I need my VMs, which are on the drives connected to that controller). tower-diagnostics-20200729-0904.zip
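     One workaround sometimes suggested for Marvell controllers that misbehave with the IOMMU enabled (hedged: it does not help every board) is booting with the IOMMU in passthrough mode, set on the kernel line in syslinux.cfg on the flash drive. A sketch of what that entry might look like:

     ```
     label Unraid OS
       menu default
       kernel /bzimage
       append iommu=pt initrd=/bzroot
     ```

     `iommu=pt` keeps the IOMMU available for VM device passthrough but uses identity mapping for host-owned devices, which is the part that tends to trip these Marvell chips.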
  10. root@Tower:~# dmesg | grep ZFS
      [   61.092654] ZFS: Loaded module v0.8.0-1, ZFS pool version 5000, ZFS filesystem version 5

      Thank you so much for your help and quick turnarounds.
  11. Thank you so much! Sorry for the headache.
  12. What I did was upload the files using WinSCP: and then rebooted. Once it comes back up, it shows this: It seems to be reverting. I assume I don't need to rename the files, right?
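     A sketch of why an upload can "revert": Unraid unpacks bzimage/bzroot from the USB flash (mounted at /boot) on every boot, so replacement files must land there and should be verified before rebooting. The directories below are stand-in temp dirs purely to make the copy-then-verify step runnable; on a real server the source is wherever WinSCP uploaded to and the destination is /boot.

     ```shell
     SRC=$(mktemp -d)   # stand-in for the directory the files were uploaded into
     DST=$(mktemp -d)   # stand-in for /boot on the flash drive
     printf 'custom zfs build\n' > "$SRC/bzimage"

     # copy the replacement onto the "flash drive"
     cp "$SRC/bzimage" "$DST/bzimage"

     # matching checksums mean the new file is really in place before rebooting
     md5sum "$SRC/bzimage" "$DST/bzimage"
     ```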
  13. Thank you so much!!!!!!!

      [   88.290629] ZFS: Loaded module v0.8.3-1, ZFS pool version 5000, ZFS filesystem version 5

      I have it installed, and am currently stress testing. Let's hope this works!
  14. I previously posted about a kernel panic under heavy load, and it seems this was addressed 6 days ago: https://github.com/openzfs/zfs/pull/10148 Is there a way we can get this implemented, or does anyone know of a workaround?
  15. Anyone getting ZFS kernel panics? It seems to only happen when nzbget is running full tilt. Here is my error:

      Apr 2 01:11:48 Tower kernel: PANIC: zfs: accessing past end of object e26/543cf (size=6656 access=6308+1033)
      Apr 2 01:11:48 Tower kernel: Showing stack for process 25214
      Apr 2 01:11:48 Tower kernel: CPU: 2 PID: 25214 Comm: nzbget Tainted: P O 4.19.107-Unraid #1
      Apr 2 01:11:48 Tower kernel: Hardware name: ASUSTeK COMPUTER INC. Z9PE-D16 Series/Z9PE-D16 Series, BIOS 5601 06/11/2015
      Apr 2 01:11:48 Tower kernel: Call Trace:
      Apr 2 01:11:48 Tower kernel: dump_stack+0x67/0x83
      Apr 2 01:11:48 Tower kernel: vcmn_err+0x8b/0xd4 [spl]
      Apr 2 01:11:48 Tower kernel: ? spl_kmem_alloc+0xc9/0xfa [spl]
      Apr 2 01:11:48 Tower kernel: ? _cond_resched+0x1b/0x1e
      Apr 2 01:11:48 Tower kernel: ? mutex_lock+0xa/0x25
      Apr 2 01:11:48 Tower kernel: ? dbuf_find+0x130/0x14c [zfs]
      Apr 2 01:11:48 Tower kernel: ? _cond_resched+0x1b/0x1e
      Apr 2 01:11:48 Tower kernel: ? mutex_lock+0xa/0x25
      Apr 2 01:11:48 Tower kernel: ? arc_buf_access+0x69/0x1f4 [zfs]
      Apr 2 01:11:48 Tower kernel: ? _cond_resched+0x1b/0x1e
      Apr 2 01:11:48 Tower kernel: zfs_panic_recover+0x67/0x7e [zfs]
      Apr 2 01:11:48 Tower kernel: ? spl_kmem_zalloc+0xd4/0x107 [spl]
      Apr 2 01:11:48 Tower kernel: dmu_buf_hold_array_by_dnode+0x92/0x3b6 [zfs]
      Apr 2 01:11:48 Tower kernel: dmu_write_uio_dnode+0x46/0x11d [zfs]
      Apr 2 01:11:48 Tower kernel: ? txg_rele_to_quiesce+0x24/0x32 [zfs]
      Apr 2 01:11:48 Tower kernel: dmu_write_uio_dbuf+0x48/0x5e [zfs]
      Apr 2 01:11:48 Tower kernel: zfs_write+0x6a3/0xbe8 [zfs]
      Apr 2 01:11:48 Tower kernel: zpl_write_common_iovec+0xae/0xef [zfs]
      Apr 2 01:11:48 Tower kernel: zpl_iter_write+0xdc/0x10d [zfs]
      Apr 2 01:11:48 Tower kernel: do_iter_readv_writev+0x110/0x146
      Apr 2 01:11:48 Tower kernel: do_iter_write+0x86/0x15c
      Apr 2 01:11:48 Tower kernel: vfs_writev+0x90/0xe2
      Apr 2 01:11:48 Tower kernel: ? list_lru_add+0x63/0x13a
      Apr 2 01:11:48 Tower kernel: ? vfs_ioctl+0x19/0x26
      Apr 2 01:11:48 Tower kernel: ? do_vfs_ioctl+0x533/0x55d
      Apr 2 01:11:48 Tower kernel: ? syscall_trace_enter+0x163/0x1aa
      Apr 2 01:11:48 Tower kernel: do_writev+0x6b/0xe2
      Apr 2 01:11:48 Tower kernel: do_syscall_64+0x57/0xf2
      Apr 2 01:11:48 Tower kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
      Apr 2 01:11:48 Tower kernel: RIP: 0033:0x14c478acbf90
      Apr 2 01:11:48 Tower kernel: Code: 89 74 24 10 48 89 e5 48 89 04 24 49 29 c6 48 89 54 24 18 4c 89 74 24 08 49 01 d6 48 63 7b 78 49 63 d7 4c 89 e8 48 89 ee 0f 05 <48> 89 c7 e8 1b 85 fd ff 49 39 c6 75 19 48 8b 43 58 48 8b 53 60 48
      Apr 2 01:11:48 Tower kernel: RSP: 002b:000014c478347640 EFLAGS: 00000216 ORIG_RAX: 0000000000000014
      Apr 2 01:11:48 Tower kernel: RAX: ffffffffffffffda RBX: 0000558040d4e920 RCX: 000014c478acbf90
      Apr 2 01:11:48 Tower kernel: RDX: 0000000000000002 RSI: 000014c478347640 RDI: 0000000000000005
      Apr 2 01:11:48 Tower kernel: RBP: 000014c478347640 R08: 0000000000000001 R09: 000014c478b15873
      Apr 2 01:11:48 Tower kernel: R10: 0000000000000006 R11: 0000000000000216 R12: 000000000000000b
      Apr 2 01:11:48 Tower kernel: R13: 0000000000000014 R14: 0000000000000409 R15: 0000000000000002
  16. Oh boy... I have 1Gbps to the house and intended on using this to get as full throughput as possible. I have an RAX120 and it is unstable, but it can reach 970Mbps. I'll do some tests, but I already downloaded OPNsense just in case.
  17. I'm trying to get my first Docker image working. I noticed in the logs that it shows:

      2020-03-05 10:57:32,169 INFO exited: plexmediaserver (terminated by SIGABRT; not expected)
      2020-03-05 10:57:32,169 DEBG received SIGCHLD indicating a child quit
      2020-03-05 10:57:35,173 INFO spawned: 'plexmediaserver' with pid 76
      2020-03-05 10:57:35,182 DEBG 'plexmediaserver' stderr output: mkdir: cannot create directory '/mnt/user': Permission denied
      2020-03-05 10:57:35,203 DEBG 'plexmediaserver' stderr output: terminate called after throwing an instance of 'std::runtime_error'
        what(): Codecs: Initialize: 'boost::filesystem::temp_directory_path: Not a directory: "/mnt/user/appdata/plex/transcode/"'
      2020-03-05 10:57:35,255 DEBG 'plexmediaserver' stderr output: ****** PLEX MEDIA SERVER CRASHED, CRASH REPORT WRITTEN: /config/Plex Media Server/Crash Reports/1.18.8.2468-5d395aa9d/PLEX MEDIA SERVER/751a2361-ee5e-9748-2765ab4f-00314952.dmp
      2020-03-05 10:57:35,266 DEBG 'plexmediaserver' stderr output: Error in command line: the argument for option '--serverUuid' should follow immediately after the equal sign
        Crash Uploader options (all are required):
      2020-03-05 10:57:35,266 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 22492446595584 for <Subprocess at 22492446700736 with name plexmediaserver in state STARTING> (stdout)>
      2020-03-05 10:57:35,266 DEBG 'plexmediaserver' stderr output:
        --directory arg        Directory to scan for crash reports
        --serverUuid arg       UUID of the server that crashed
        --userId arg           User that owns this product
        --platform arg         Platform string
        --platformVersion arg  Platform version string
        --vendor arg           Vendor string
        --device arg           Device string
        --model arg            Device model string
        --sentryUrl arg        Sentry URL to upload to
        --sentryKey arg        Sentry Key for the project
        --version arg          Version of the product
      2020-03-05 10:57:35,266 INFO exited: plexmediaserver (terminated by SIGABRT; not expected)
      2020-03-05 10:57:35,266 DEBG received SIGCHLD indicating a child quit
      2020-03-05 10:57:35,266 INFO gave up: plexmediaserver entered FATAL state, too many start retries too quickly

      The only thing I have figured out is to chown nobody:users /mnt/user (the directory already exists). Is there something I am missing?
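     A sketch of the ownership/permissions fix, assuming the container runs as nobody:users (UID 99 / GID 100, the usual Unraid mapping). A scratch directory stands in for /mnt/user/appdata/plex so the snippet runs unprivileged; the real chown is shown as a comment because it needs root:

     ```shell
     APPDATA=$(mktemp -d)/plex            # stand-in for /mnt/user/appdata/plex
     mkdir -p "$APPDATA/transcode"        # the dir Plex's temp_directory_path expects
     chmod -R u+rwX,g+rwX "$APPDATA"      # read/write for owner and group, X = traverse dirs only
     # on the server itself (as root):  chown -R nobody:users /mnt/user/appdata/plex
     ls -ld "$APPDATA/transcode"
     ```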
  18. OK, I got it working. Setting the VM to use Q35-2.9 worked. (Not sure if unsafe interrupts helped.)
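     For reference, the machine type lives in the os block of the VM's libvirt XML; a minimal sketch of the element that pins Q35-2.9 (the rest of the domain definition is assumed):

     ```xml
     <os>
       <type arch='x86_64' machine='pc-q35-2.9'>hvm</type>
     </os>
     ```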
  19. I'll make a note of that; I'd rather see if I can get this working first. I added allow_unsafe_interrupts:

      kernel /bzimage append vfio-pci.ids=8086:150e,10de:1e07,10de:10f7,10de:1ad6,10de:1ad7 vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot

      and got 1 of the 4 NICs to show up. Here is the log from the VM:

      -nodefaults \
      -chardev socket,id=charmonitor,fd=35,server,nowait \
      -mon chardev=charmonitor,id=monitor,mode=control \
      -rtc base=utc,driftfix=slew \
      -global kvm-pit.lost_tick_policy=delay \
      -no-hpet \
      -no-shutdown \
      -boot strict=on \
      -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
      -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
      -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
      -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
      -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
      -device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
      -device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
      -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
      -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
      -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,multifunction=on,addr=0x7.0x1 \
      -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
      -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
      -blockdev '{"driver":"file","filename":"/mnt/sdd/vms/Router (PfSense)/vdisk1.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
      -blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
      -device ide-hd,bus=ide.2,drive=libvirt-1-format,id=sata0-0-2,bootindex=1,write-cache=on \
      -chardev pty,id=charserial0 \
      -device isa-serial,chardev=charserial0,id=serial0 \
      -chardev socket,id=charchannel0,fd=37,server,nowait \
      -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
      -device usb-tablet,id=input0,bus=usb.0,port=1 \
      -vnc 0.0.0.0:0,websocket=5700 \
      -k en-us \
      -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
      -device vfio-pci,host=0000:04:00.0,id=hostdev0,bus=pci.1,addr=0x0 \
      -device vfio-pci,host=0000:04:00.1,id=hostdev1,bus=pci.3,addr=0x0 \
      -device vfio-pci,host=0000:04:00.2,id=hostdev2,bus=pci.4,addr=0x0 \
      -device vfio-pci,host=0000:04:00.3,id=hostdev3,bus=pci.5,addr=0x0 \
      -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
      -msg timestamp=on
      2020-03-03 23:14:15.141+0000: Domain id=4 is tainted: high-privileges
      2020-03-03 23:14:15.141+0000: Domain id=4 is tainted: host-cpu
      char device redirected to /dev/pts/1 (label charserial0)
  20. First post. I am evaluating Unraid for my needs and created a pfSense VM. I got pfSense installed, but it shows no interfaces. Is there something glaring that I am missing?

      Info: Version 6.8.2

      lspci shows this:

      04:00.0 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
      04:00.1 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
      04:00.2 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)
      04:00.3 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)

      My boot cfg looks like this:

      kernel /bzimage append vfio-pci.ids=8086:150e,10de:1e07,10de:10f7,10de:1ad6,10de:1ad7 initrd=/bzroot

      8086:150e is the 4-port Intel NIC (the rest are the NVIDIA 2080 Ti).

      Here is the config: and in XML form (only the hostdev part):

      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </source>
        <alias name='hostdev0'/>
        <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
        </source>
        <alias name='hostdev1'/>
        <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
        </source>
        <alias name='hostdev2'/>
        <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
        </source>
        <alias name='hostdev3'/>
        <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
      </hostdev>

      and finally from pfSense: