dimes007

Members

  • Posts: 102
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


dimes007's Achievements

  • Apprentice (3/14)
  • Reputation: 2

  1. I could never find anything in my system documentation about which slots are wired to which socket. Using your tutorial, three video cards come up on node 0. Kind of a bummer. So now do I isolate all but 2 cores on node 0 and see if most of the system will run on node 1 with all or most of its cores? It seems crazy to have something like Plex cross NUMA nodes, but if the OS is NUMA-aware maybe it'll head to node 1 and stay there. Another general question I'm unsure of: what are best practices for the emulatorpin cpuset? (See the pinning sketch after this list.) Thanks.
  2. I'm in the BIOS now looking around and googling anything not obvious: Limit CPUID Maximum, Local APIC Mode, Intel TXT (LT-SX) Configuration, Intel I/OAT... FOUND IT. Maybe it'll help someone else: Devices => North Bridge => Numa => Disabled.
  3. Thanks for this!!! I was looking to use some winter-break downtime to tune VMs. I already reserve and pin core pairs for VMs, but I don't strictly reserve VM memory on the pinned socket or make sure passed-through video cards go to VMs pinned to the socket the card is wired to... But I can't even get out of the gate:
       root@mu:~# numactl -H
       available: 1 nodes (0)
     For some reason Unraid sees only 1 NUMA node. I have two Xeon E5-2660 v2 CPUs. Is this to be expected on this old hardware? (See the NUMA sketch after this list.) Thanks.
  4. Since I resurrected an old thread: the issue was that the script failed on > modprobe -r kvm_intel since I always have pfSense running. I shut down all VMs, reran the script, and all is well. (See the kvm_intel sketch after this list.)
  5. Trying to do the same for Houseparty on Android. Any luck?
  6. Trying to do this today to get the Android version of the Houseparty app running in my Windows 10 VM. Did you ever get this going? I followed the Space Invader One video to enable nested VMs. I can run BlueStacks but still get a warning about my hardware not supporting VMs. (See the nested-virtualization check after this list.)
  7. So with the 970 EVO I'm having the same issues, I think. It first stopped after 22 days and I rebooted. Now again after ~36 days, and this time it didn't even come back online after the reboot. Maybe I didn't cut power long enough for everything to reset. Is anybody else having issues in general with NVMe in Unraid, or is it just me? 8700K, Gigabyte Z370XP SLI.
  8. Thanks. The board is a Gigabyte Z370XP SLI; it's been a fine board. OK, I ordered a 970 EVO. It's on the way. Fingers crossed. Thanks, --dimes
  9. I've got an Adata XPG SX8200 Pro PCIe x4 NVMe drive, XFS formatted and mounted with Unassigned Devices. It goes offline and shows as "missing" in Unassigned Devices on occasion. Last time I had 22 days of uptime; this time 9 days. The relevant system log is pasted below. This has been happening on and off for months. It was my cache, but I moved that to an old SATA SSD, so at least when the Adata goes offline my machine can stay up. The PCIe ACS Override setting is enabled. I'm unsure what to do to fix it, and also unsure whether I should try a different PCIe x4 SSD or just get a SATA SSD. Any help or suggestions appreciated (see the NVMe power-state note after this list). Thanks, --dimes
       Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 336 QID 3 timeout, aborting
       Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 378 QID 3 timeout, aborting
       Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 427 QID 3 timeout, aborting
       Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 428 QID 3 timeout, aborting
       Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 430 QID 3 timeout, aborting
       Sep 13 10:32:22 pi kernel: nvme nvme0: I/O 336 QID 3 timeout, reset controller
       Sep 13 10:32:52 pi kernel: nvme nvme0: I/O 22 QID 0 timeout, reset controller
       Sep 13 10:34:23 pi kernel: nvme nvme0: Device not ready; aborting reset
       Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
       Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
       Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
       Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
       Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
       Sep 13 10:35:23 pi kernel: nvme nvme0: Device not ready; aborting reset
       Sep 13 10:35:23 pi kernel: nvme nvme0: Removing after probe failure status: -19
       Sep 13 10:36:24 pi kernel: nvme nvme0: Device not ready; aborting reset
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd59842 len 64 error 5
       Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612895568
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 170104
       Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612895056
       Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612894544
       Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612893520
       Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612893008
       Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612892496
       Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612891984
       Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612891472
       Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612889424
       Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612888912
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 399096
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 557851984
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 543126416
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 400728
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 596713920
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 596713912
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 401544
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 596713864
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 695560
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xfs_buf_iodone_callback_error" at daddr 0x2483d170 len 8 error 5
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Log I/O Error Detected. Shutting down filesystem
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1e50b5f0. Retrying async write.
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Please umount the filesystem and rectify the problem(s)
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd597ba len 64 error 5
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df9e870. Retrying async write.
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df9e878. Retrying async write.
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x21101ba0. Retrying async write.
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1ee265d8. Retrying async write.
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1de0afb8. Retrying async write.
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd597fa len 64 error 5
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1de0b338. Retrying async write.
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x20a47e88. Retrying async write.
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df157f0. Retrying async write.
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df14530. Retrying async write.
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd5983a len 64 error 5
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd59840 len 64 error 5
       Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
       Sep 13 10:36:24 pi kernel: nvme nvme0: failed to set APST feature (-19)
       Sep 13 10:37:28 pi kernel: qemu-system-x86[32027]: segfault at 4 ip 00005635a59e04d7 sp 0000150abbafee98 error 6 in qemu-system-x86_64[5635a57b1000+4cc000]
       Sep 13 10:37:28 pi kernel: Code: 48 89 c3 e8 3b f3 ff ff 48 83 c4 08 48 89 d8 5b 5d c3 90 c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 8b 87 98 09 00 00 <01> 70 04 c3 0f 1f 44 00 00 c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f
       Sep 13 10:37:28 pi kernel: br0: port 4(vnet4) entered disabled state
       Sep 13 10:37:28 pi kernel: device vnet4 left promiscuous mode
       Sep 13 10:37:28 pi kernel: br0: port 4(vnet4) entered disabled state
  10. I have a new disk which I was looking to encrypt and preclear, so I could add the precleared encrypted drive to the array while maintaining parity (which may be flawed in and of itself). Ignoring that for the moment, and following the above: I disabled VMs and Docker (tabs gone). Removed the cache drive, which was btrfs (I've only ever had one slot). Added the new disk as cache. Started the array (I was given no choice of format, but Disk Settings had xfs-encrypted as the default). Formatted the new cache drive. The new drive came up formatted as btrfs, not xfs-encrypted, despite the default setting under Disk Settings. disk.cfg shows: cacheId="WDC_WD60EFRX-XXXXXXX_WD-WX17DF8C595A" (this is the new disk) and cacheFsType="btrfs". Thanks for any help you can lend. (See the cache filesystem check after this list.)
  11. Thanks for the advice. I unplugged the physical LAN NIC and went for it. So the LAN NIC in pfSense is now vtnet0 (br0) passed from unRAID. As of now it's still using virtio, but pfSense hasn't had any trouble seeing it on boots. The WAN NIC is still the physical x1 Intel NIC passed through. DHCP is working on the LAN through virtio. To be clear, I'm passing the unRAID br0 through to pfSense. I'm not passing br0.XX for tagged packets because I don't really want separate virtual NICs in pfSense; my VLANs are already defined in pfSense. I want all br0 traffic, even tagged packets, to get to pfSense on the same virtio interface, but maybe what I'm trying to do isn't possible with unRAID's implementation of VLANs and I need to pass each VLAN as a different NIC to pfSense. (See the bridge/VLAN sketch after this list.)
  12. Hey Grid. First of all, thanks for all the videos. I watched the first pfSense video but ventured out on my own before the 2nd was released; I'll check it out now. This past weekend I had my first taste of pfSense and VLANs (in general I'm good with unRAID, UniFi and VMs). After about 3 days of effort between premises wiring, the pfSense VM configuration, a Netgear switch, unRAID VLANs and the UniFi controller (in a Docker no less), things are going well. My setup is as follows: pfSense has the two physical NICs passed through, each with 1 port. 1. WAN from the cable modem. 2. The original SSID and my existing items, still on the 192.168.147.1/24 LAN. Other interfaces are: 3. VLAN10 at 10.10.10.1/24; it has its own SSID as well as a guest SSID with a captive portal through the UniFi controller. 4. A virtual interface on one of the virtual bridges in unRAID, which as of now IS NOT USED in pfSense. Now that things work and have settled down, the remaining question for anybody is one of efficiency/optimization. The physical LAN connection to pfSense has my main LAN untagged and VLAN10 tagged. The physical LAN connection to unRAID has my main LAN untagged and VLAN10 tagged. You see where this is going... I can save a switch port, gain a PCIe x1 slot back and maybe gain some speed if I eliminate the physical LAN NIC and pass the unRAID br0 (or maybe BOTH br0 and br0.10) through to the pfSense VM. I would think the virtual 10-gig network is hella fast. Am I asking for trouble here? Again, this is my first experience with VLANs and my first experience with pfSense, so I'm not sure if I should just leave well enough alone. What do y'all think? Thanks, --dimes
  13. Every time CA Backup's stopping and starting of Dockers affects Nextcloud, I google "unraid hulk smash" to get back here to fix it. I may never remember how to get a shell prompt in my Docker and then run # mysqld --tc-heuristic-recover commit, but I may never forget "hulk smash". Thanks chaosratt. (There's a sketch of the recovery steps after this list.)
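
A few command-line sketches for the posts above, starting with the NUMA questions in items 1 and 3. This is a minimal sketch, not Unraid-specific advice: the PCI address and the domain name "win10" are placeholders, and the core numbers are examples you would replace after reading your own numactl -H output. One common approach to the emulatorpin question is to keep the emulator threads on a core pair in the same node but off the vCPU cores.

     # Confirm the kernel actually sees both sockets as separate NUMA nodes
     numactl -H
     lscpu | grep -i numa

     # Which node is a passed-through GPU wired to? (-1 usually means the
     # platform didn't report one, e.g. NUMA disabled in the BIOS)
     # 0000:03:00.0 is a placeholder PCI address -- use your card's.
     cat /sys/bus/pci/devices/0000:03:00.0/numa_node

     # Pin a VM's vCPUs to a core/HT pair on that node, keep the emulator
     # threads on a different pair, and bind its memory to the same node.
     # Core numbers and the domain name "win10" are examples only.
     virsh vcpupin win10 0 2
     virsh vcpupin win10 1 22
     virsh emulatorpin win10 0,20
     virsh numatune win10 --mode strict --nodeset 0 --config

The numatune change made with --config takes effect the next time the VM starts.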
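
For the modprobe failure in item 4 and the nested-VM warning in item 6: kvm_intel can't be unloaded while any VM is using it, and the nested flag only changes when the module is reloaded. A rough sketch, assuming the VMs are managed through virsh and "pfsense" is whatever your domain is actually named:

     # See which VMs are still running (they hold kvm_intel)
     virsh list --all

     # Shut them down cleanly, then reload the module with nesting enabled
     virsh shutdown pfsense
     modprobe -r kvm_intel
     modprobe kvm_intel nested=1

     # Verify the host now advertises nested virtualization (Y or 1 = enabled)
     cat /sys/module/kvm_intel/parameters/nested

The guest also needs a CPU mode that exposes VMX to it (host-passthrough, for example) before something like BlueStacks will see hardware virtualization.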
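
On the NVMe drives dropping offline in items 7 and 9: the "failed to set APST feature" line in that log is the usual prompt to try disabling the drive's low-power states. A commonly suggested workaround, not a guaranteed fix, is adding a kernel parameter to the append line of the Unraid boot config and rebooting. The stanza below is only an example entry, so merge the parameter into your existing append line rather than replacing it:

     # /boot/syslinux/syslinux.cfg -- example append line
     label Unraid OS
       menu default
       kernel /bzimage
       append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot

     # After rebooting, if nvme-cli is available, check the drive is back
     nvme list
     nvme smart-log /dev/nvme0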
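
For the cache drive in item 10 that came up btrfs instead of xfs-encrypted: if I remember right, the filesystem setting on the cache device's own page overrides the global default unless it's left on auto, so that's worth checking before reformatting. A quick sketch to see what's actually on the disk versus what the config says (sdX1 is a placeholder for the cache partition):

     # What filesystem did the cache actually get?
     blkid /dev/sdX1

     # What does the Unraid config say it should be?
     grep -i "fsType" /boot/config/disk.cfg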
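
On the br0 vs. br0.X question in items 11 and 12: when VLANs are enabled, Unraid creates a separate bridge per VLAN (br0, br0.10, ...), and each bridge hands its VLAN to the guest untagged, so a VM attached only to br0 generally won't see the tagged VLAN10 frames. The usual answer is one virtio NIC per bridge. A hedged virsh sketch, where the domain name "pfsense" and VLAN 10 are just examples:

     # List the bridges Unraid created
     ip -br link show | grep br0

     # Add a second virtio NIC on the VLAN10 bridge; applies at next VM start
     virsh attach-interface pfsense bridge br0.10 --model virtio --config

In pfSense the extra NIC then shows up as another vtnet interface to assign to the VLAN10 network, with no VLAN tagging needed on the pfSense side.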
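
And for the Nextcloud fix in item 13: the "hulk smash" steps amount to getting a shell in the database container and running the heuristic recovery the post mentions. A sketch, assuming the container is still running and is named "mariadb" (yours will likely differ):

     # Find the database container's actual name
     docker ps -a | grep -iE "maria|mysql"

     # Shell in, run the recovery command from the post, then restart the container
     docker exec -it mariadb bash
     mysqld --tc-heuristic-recover commit
     exit
     docker restart mariadb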