dimes007

Members
  • Posts: 115

Everything posted by dimes007

  1. Sure. Here's what we both did: we isolated all the P-core threads, CPUs 0-11, for a Windows daily driver and assigned those isolated cores to that VM. Performance in the VM is good, so you think all is well. But one day when the VM wasn't running I noticed something I hadn't thought to check: core 0 is NOT actually isolated. Stop your VM, do something in Unraid or your Dockers, and watch whether only cores 1-11 stay at 0%. For me, despite isolcpus being set, core 0 had activity. Please verify my results. Regards.
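     A quick way to double-check what the kernel actually isolated (standard Linux interfaces, nothing Unraid-specific; run it with the VM stopped):
        # What the kernel was booted with (the isolcpus= entry should show up here)
        cat /proc/cmdline
        # Which CPUs the kernel currently treats as isolated (empty output means it didn't take)
        cat /sys/devices/system/cpu/isolated
        # Watch per-core load with no VMs running; press 1 inside top to show every core
        top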
  2. I isolated CPUs 0-11 (all 6 P-cores and their hyperthreads) on my 12600K. In reality it didn't work: with all VMs off, core 0 still had plenty of activity. I guess it's only 5 core pairs isolated for me. Maybe I'll force emulatorpin onto 0,1.
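     For reference, a rough sketch of what forcing the emulator threads onto 0,1 could look like; the VM name "Windows10" is just a placeholder, and the element lives inside the VM's <cputune> block:
        # See what pinning the VM currently has
        virsh dumpxml Windows10 | grep -A10 '<cputune>'
        # Then add a line like this inside <cputune> via the XML editor (or virsh edit):
        #   <emulatorpin cpuset='0,1'/>
        virsh edit Windows10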
  3. The TL;DR is that with this ASUS motherboard you can't use legacy boot along with both the discrete AND the integrated GPU at the same time. Do you guys have that issue? Is it a weird 12th-gen Intel thing, or just a silly programming decision in the ASUS BIOS? == Longer version: in order to keep the iGPU active when a discrete GPU is plugged in, you have to go into the BIOS without the discrete card installed. You can then set Multi-Monitor to Enabled and assign the initial GPU to the iGPU. Then shut down, install the discrete card, and it'll show up. But doing so disables CSM, and along with it any way to boot legacy. I guess UEFI boot for Unraid isn't a big deal, but I've never used it.
  4. Have you ever tried passing through the iGPU to a Windows VM? I've only ever used CPU transcoding in Plex. In this new setup, by my estimation the 4 E-cores give about a 7000 PassMark and can do 3 x 1080p streams simultaneously. I rarely need more than one. Regards.
  5. And I have an ASUS Prime Z690-P D4 coming today. You'd think we could've standardized, or at least let one of us be the guinea pig. Hopefully they all do great.
  6. I'm in your camp. I came to the conclusion that, AFAICT, other than GPU issues we'll be fine. My intent is to run RC3, pass through a discrete GPU, and let the CPU handle Plex transcoding until support for the 12600K is fully there.
  7. Have you ever turned on the overrides to see whether you can pass through the Bus 2 USB controller to a VM? I was able to do this once on a motherboard with an NEC USB 3 controller, but that was probably in a separate IOMMU group. Regards.
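     If it helps, this is the sort of loop I'd run to see whether that USB controller lands in its own IOMMU group (plain sysfs plus lspci; nothing here is Unraid-specific):
        for d in /sys/kernel/iommu_groups/*/devices/*; do
            g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
            printf 'IOMMU group %s: ' "$g"
            lspci -nns "${d##*/}"
        done | sort -V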
  8. True. I can't be the only one. And I'll admit, I didn't read all 132 pages.
  9. I don't mean to hijack the thread, but how do y'all get lstopo / hwloc to work? The Spaceinvader One script links in his YouTube video are dead, and neither tool is in CA Apps.
  10. Oh yeah. I wasn't worried about the speed of the E-cores. I'm afraid that isolating core 0 would somehow mess up the OS, or that Unraid would still start some base processes on core 0 because it has to. I should have been clearer.
  11. I'm with you: I typically isolate the higher cores for VMs, and core 0 and its hyperthreaded pair stay with Unraid (at minimum). My intention was to let Unraid and the Dockers have all of the E-cores and give all of the P-cores to VMs. Looking at some posts here, it appears the P-cores start at 0. Can I isolcpus cores 0-5 (and their hyperthread pairs on a 12600K) and let Unraid use ONLY the E-cores, meaning it won't have core 0 anymore? Thanks.
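     If that's workable, my understanding is the boot config would end up with an append line roughly like the one below (assuming the 12600K enumerates the P-core threads as 0-11 and the E-cores as 12-15; please correct me if that layout is wrong):
        # /boot/syslinux/syslinux.cfg (only the relevant stanza shown)
        label Unraid OS
          append isolcpus=0-11 initrd=/bzroot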
  12. Squid: Thanks for all of the amazing work on these. Can I throw in a feature request to have System Autofan log which fan it is adjusting? If you want to go further, let us add friendly names for each fan, such as "CPU Fan", "Top Exhaust", etc. Thanks.
  13. I could never find anything in my system documentation about which slots are wired to which socket. Using your tutorial, all 3 video cards come up on node 0. Kind of a bummer. So now do I isolate all but 2 cores on node 0 and see if most of the system will run on node 1 with all or most of its cores? It seems crazy to have something like Plex cross NUMA nodes, but if the OS is NUMA-aware maybe it'll head to node 1 and stay there. Another general question I'm unsure about: what are best practices for the emulatorpin cpuset? Thanks.
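     For anyone following along, the node each card hangs off of can also be read straight from sysfs (the PCI address below is just an example):
        # NUMA layout as the OS sees it
        numactl -H
        # Node for one specific card (-1 means the platform didn't report one)
        cat /sys/bus/pci/devices/0000:01:00.0/numa_node
        # Or check every VGA device at once
        for dev in $(lspci -D | awk '/VGA/ {print $1}'); do
            echo "$dev -> node $(cat /sys/bus/pci/devices/$dev/numa_node)"
        done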
  14. I'm in the BIOS now, looking around and googling anything that isn't obvious: Limit CPUID Maximum, Local APIC Mode, Intel TXT(LT-SX) Configuration, Intel I/OAT... FOUND IT. Maybe it'll help someone else: the setting is under Devices => North Bridge => NUMA, and it was set to Disabled.
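     After flipping that to Enabled and rebooting, both sockets should show up as separate nodes; this is how I'd confirm it:
        numactl -H              # should list 2 nodes now
        lscpu | grep -i numa    # NUMA node0 / node1 CPU lists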
  15. Thanks for this!!! I was looking to use some winter-break downtime to tune VMs. I already reserve and pin core pairs for VMs, but I don't strictly reserve VM memory on the pinned socket or make sure passed-through video cards go to VMs pinned to the socket the card is wired to... But I can't even get out of the gate:
     root@mu:~# numactl -H
     available: 1 nodes (0)
     For some reason Unraid sees only 1 NUMA node. I have two Xeon E5-2660 v2 CPUs. Is this to be expected with this old hardware? Thanks.
  16. Since I resurrected an old thread: the issue was that the script failed on > modprobe -r kvm_intel because I always have pfSense running. I shut down all VMs, reran the script, and all is well.
  17. Since I resurrected an old thread: the issue was that the script failed on > modprobe -r kvm_intel because I always have pfSense running. I shut down all VMs, reran the script, and all is well.
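     For anyone hitting the same thing, the order that worked for me was basically the following (the nested=1 option is what I believe the script passes; double-check against your copy):
        # Make sure nothing is still holding the module (pfSense included)
        virsh list --all
        # With every VM shut down, the module can be unloaded and reloaded
        modprobe -r kvm_intel
        modprobe kvm_intel nested=1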
  18. Trying to do the same for Houseparty on Android. Any luck?
  19. Trying to do this today to get the Android version of the Houseparty app running in my Windows 10 VM. Did you ever get this going? I followed the Spaceinvader One video to enable nested VMs. I can run BlueStacks, but I still get a warning about my hardware not supporting VMs.
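     In case it narrows things down, the two places I'd check are the host module and the VM's CPU definition (the VM name is a placeholder; the XML forms in the comments are the usual libvirt options, not something I've confirmed fixes BlueStacks):
        # Host side: nested virtualization has to be on
        cat /sys/module/kvm_intel/parameters/nested
        # Guest side: the VM has to expose VT-x, e.g. in its XML either
        #   <cpu mode='host-passthrough' check='none'/>
        # or
        #   <cpu mode='host-model'><feature policy='require' name='vmx'/></cpu>
        virsh dumpxml "Windows 10" | grep -A4 '<cpu '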
  20. So with the 970 EVO I think I'm having the same issues. It first stopped after 22 days; I rebooted. Now it happened again after ~36 days. This time I rebooted and it didn't even come back online; maybe I didn't cut power long enough for everything to reset. Is anybody else having issues with NVMe in Unraid in general, or is it just me? 8700K, Gigabyte Z370XP SLI.
  21. Thanks. The board is a Gigabyte Z370XP SLI. It's been a fine board. OK, I ordered a 970 EVO and it's on the way. Fingers crossed. Thanks, --dimes
  22. I've got an ADATA XPG SX8200 Pro PCIe x4 NVMe drive, XFS-formatted and mounted with Unassigned Devices. It goes offline and shows as "missing" in Unassigned Devices on occasion. Last time I had 22 days of uptime; this time 9 days. The relevant system log is pasted below. This has been happening on and off for months. It was my cache drive, but I moved the cache to an old SATA SSD so at least when the ADATA goes offline my machine can stay up. The PCIe ACS Override setting is enabled. I'm unsure what to do to fix it, and I'm also unsure whether I should try a different PCIe x4 SSD or just get a SATA SSD. Any help or suggestions appreciated. (See also the APST note after the last post in this list.) Thanks, --dimes
     Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 336 QID 3 timeout, aborting
     Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 378 QID 3 timeout, aborting
     Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 427 QID 3 timeout, aborting
     Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 428 QID 3 timeout, aborting
     Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 430 QID 3 timeout, aborting
     Sep 13 10:32:22 pi kernel: nvme nvme0: I/O 336 QID 3 timeout, reset controller
     Sep 13 10:32:52 pi kernel: nvme nvme0: I/O 22 QID 0 timeout, reset controller
     Sep 13 10:34:23 pi kernel: nvme nvme0: Device not ready; aborting reset
     Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
     Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
     Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
     Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
     Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
     Sep 13 10:35:23 pi kernel: nvme nvme0: Device not ready; aborting reset
     Sep 13 10:35:23 pi kernel: nvme nvme0: Removing after probe failure status: -19
     Sep 13 10:36:24 pi kernel: nvme nvme0: Device not ready; aborting reset
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd59842 len 64 error 5
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612895568
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 170104
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612895056
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612894544
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612893520
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612893008
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612892496
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612891984
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612891472
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612889424
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612888912
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 399096
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 557851984
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 543126416
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 400728
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 596713920
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 596713912
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 401544
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 596713864
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 695560
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xfs_buf_iodone_callback_error" at daddr 0x2483d170 len 8 error 5
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Log I/O Error Detected. Shutting down filesystem
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1e50b5f0. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Please umount the filesystem and rectify the problem(s)
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd597ba len 64 error 5
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df9e870. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df9e878. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x21101ba0. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1ee265d8. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1de0afb8. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd597fa len 64 error 5
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1de0b338. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x20a47e88. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df157f0. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df14530. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd5983a len 64 error 5
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd59840 len 64 error 5
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
     Sep 13 10:36:24 pi kernel: nvme nvme0: failed to set APST feature (-19)
     Sep 13 10:37:28 pi kernel: qemu-system-x86[32027]: segfault at 4 ip 00005635a59e04d7 sp 0000150abbafee98 error 6 in qemu-system-x86_64[5635a57b1000+4cc000]
     Sep 13 10:37:28 pi kernel: Code: 48 89 c3 e8 3b f3 ff ff 48 83 c4 08 48 89 d8 5b 5d c3 90 c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 8b 87 98 09 00 00 <01> 70 04 c3 0f 1f 44 00 00 c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f
     Sep 13 10:37:28 pi kernel: br0: port 4(vnet4) entered disabled state
     Sep 13 10:37:28 pi kernel: device vnet4 left promiscuous mode
     Sep 13 10:37:28 pi kernel: br0: port 4(vnet4) entered disabled state
  23. I have a new disk which I was looking to encrypt and preclear, so I could add the precleared, encrypted drive to the array while maintaining parity (which may be flawed in and of itself). Ignoring that for the moment and following the above: I disabled VMs and Docker (the tabs are gone). I removed the cache drive, which was btrfs (I've only ever had one cache slot). I added the new disk as cache and started the array (I was given no choice of format, but Disk Settings had encrypted XFS as the default). I formatted the new cache drive. The new drive came out formatted as btrfs and not encrypted XFS, despite the default setting under Disk Settings. disk.cfg shows:
     cacheId="WDC_WD60EFRX-XXXXXXX_WD-WX17DF8C595A" (this is the new disk)
     cacheFsType="btrfs"
     Thanks for any help you can lend.
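     Following up on the NVMe drops in post 22 above: the "failed to set APST feature" line is what makes me suspect NVMe power-state (APST) trouble. One thing I'm planning to try, based on what I've read about these timeout-then-drop symptoms and not something I've verified on this board, is capping the APST latency on the kernel command line:
        # /boot/syslinux/syslinux.cfg (only the relevant stanza shown; a value of 0 disables APST entirely)
        label Unraid OS
          append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot
     Setting it to 0 keeps the drive out of its deeper power states, which is where these timeout-then-offline symptoms are commonly reported to start.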