dimes007

Members
  • Content Count: 94
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About dimes007
  • Rank: Advanced Member
  • Gender: Undisclosed

  1. So with the 970 EVO I'm having the same issues, I think. It first stopped after 22 days. Rebooted. Now again after ~36 days. This time I rebooted and it didn't even come back online; maybe I didn't cut power long enough for everything to reset. Anybody else having issues in general with NVMe in unRAID, or is it just me? 8700k, Gigabyte Z370XP-SLI.
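     Sketch of the workaround I'm going to try first, in case it helps anyone: the "failed to set APST feature" line from when the ADATA dropped points at NVMe power-state (APST) trouble, and the commonly suggested fix is disabling the deeper power states with a kernel parameter. Standard unRAID paths assumed; adjust if yours differ.

       # on the flash drive, add the parameter to the append line in
       # /boot/syslinux/syslinux.cfg, then reboot:
       append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot
       # after the reboot, confirm it took:
       cat /sys/module/nvme_core/parameters/default_ps_max_latency_us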
  2. Thanks. The board is a Gigabyte Z370XP-SLI. It's been a fine board. OK, I ordered a 970 EVO. It's on the way. Fingers crossed. Thanks, --dimes
  3. I've got an ADATA XPG SX8200 Pro PCIe x4 NVMe drive, XFS-formatted and mounted with Unassigned Devices. It goes offline and shows as "missing" in Unassigned Devices on occasion. Last time I had 22 days of uptime; this time 9 days. The relevant system log is pasted below. This has been happening on and off for months. It was my cache, but I moved that to an old SATA SSD, so at least when the ADATA goes offline my machine can stay up. The PCIe ACS Override setting is enabled. I'm unsure what to do to fix it, and also unsure whether I should try a different PCIe x4 SSD or just get a SATA SSD. Any help or suggestions appreciated. Thanks, --dimes

     Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 336 QID 3 timeout, aborting
     Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 378 QID 3 timeout, aborting
     Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 427 QID 3 timeout, aborting
     Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 428 QID 3 timeout, aborting
     Sep 13 10:31:52 pi kernel: nvme nvme0: I/O 430 QID 3 timeout, aborting
     Sep 13 10:32:22 pi kernel: nvme nvme0: I/O 336 QID 3 timeout, reset controller
     Sep 13 10:32:52 pi kernel: nvme nvme0: I/O 22 QID 0 timeout, reset controller
     Sep 13 10:34:23 pi kernel: nvme nvme0: Device not ready; aborting reset
     Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
     Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
     Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
     Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
     Sep 13 10:34:23 pi kernel: nvme nvme0: Abort status: 0x7
     Sep 13 10:35:23 pi kernel: nvme nvme0: Device not ready; aborting reset
     Sep 13 10:35:23 pi kernel: nvme nvme0: Removing after probe failure status: -19
     Sep 13 10:36:24 pi kernel: nvme nvme0: Device not ready; aborting reset
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd59842 len 64 error 5
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612895568
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 170104
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612895056
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612894544
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612893520
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612893008
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612892496
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612891984
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612891472
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612889424
     Sep 13 10:36:24 pi kernel: print_req_error: I/O error, dev nvme0n1, sector 612888912
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 399096
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 557851984
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 543126416
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 400728
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 596713920
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 596713912
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 401544
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 596713864
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): writeback error on sector 695560
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xfs_buf_iodone_callback_error" at daddr 0x2483d170 len 8 error 5
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Log I/O Error Detected. Shutting down filesystem
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1e50b5f0. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Please umount the filesystem and rectify the problem(s)
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd597ba len 64 error 5
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df9e870. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df9e878. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x21101ba0. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1ee265d8. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1de0afb8. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd597fa len 64 error 5
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1de0b338. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x20a47e88. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df157f0. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): Failing async write on buffer block 0x1df14530. Retrying async write.
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd5983a len 64 error 5
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): metadata I/O error in "xlog_iodone" at daddr 0x1dd59840 len 64 error 5
     Sep 13 10:36:24 pi kernel: XFS (nvme0n1p1): xfs_do_force_shutdown(0x2) called from line 1271 of file fs/xfs/xfs_log.c. Return address = 000000004d7dcc22
     Sep 13 10:36:24 pi kernel: nvme nvme0: failed to set APST feature (-19)
     Sep 13 10:37:28 pi kernel: qemu-system-x86[32027]: segfault at 4 ip 00005635a59e04d7 sp 0000150abbafee98 error 6 in qemu-system-x86_64[5635a57b1000+4cc000]
     Sep 13 10:37:28 pi kernel: Code: 48 89 c3 e8 3b f3 ff ff 48 83 c4 08 48 89 d8 5b 5d c3 90 c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 8b 87 98 09 00 00 <01> 70 04 c3 0f 1f 44 00 00 c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f
     Sep 13 10:37:28 pi kernel: br0: port 4(vnet4) entered disabled state
     Sep 13 10:37:28 pi kernel: device vnet4 left promiscuous mode
     Sep 13 10:37:28 pi kernel: br0: port 4(vnet4) entered disabled state
  4. I have a new disk which I was looking to encrypt and preclear, so I could add the precleared, encrypted drive to the array while maintaining parity (which may be flawed in and of itself). Ignoring that for the moment, following the above:
     - Disabled VM and Docker (tabs gone).
     - Removed the cache drive (which was btrfs). (I've only ever had one cache slot.)
     - Added the new disk as cache.
     - Started the array (was given no choice of format, but Disk Settings had xfs-encrypted as the default).
     - Formatted the new cache drive.
     The new drive formatted as btrfs, not xfs-encrypted, despite the default setting under Disk Settings. disk.cfg shows:
       cacheId="WDC_WD60EFRX-XXXXXXX_WD-WX17DF8C595A" (this is the new disk)
       cacheFsType="btrfs"
     Thanks for any help you can lend.
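     In case someone wants to check the same thing on their box, the setting I'm comparing against lives on the flash (standard unRAID location assumed). My guess is the per-device filesystem selection on the cache drive's own page overrides the global Disk Settings default, but I'd like confirmation.

       # what the GUI actually wrote for the cache device:
       grep -i "cacheFsType" /boot/config/disk.cfg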
  5. Thanks for the advice. I unplugged the physical LAN NIC and went for it. The LAN NIC in pfSense is now vtnet0 (br0) passed from unRAID. As of now it's still using virtio, and pfSense hasn't had any trouble seeing it on boots. The WAN NIC is still the physical x1 Intel NIC passed through. DHCP is working on LAN through virtio. To be clear, I'm passing unRAID's br0 through to pfSense. I'm not passing br0.XX for tagged packets, because I don't really want separate virtual NICs in pfSense; my VLANs are already defined in pfSense. I want all br0 traffic, even tagged packets, to reach pfSense on the same virtio interface, but maybe what I'm trying to do isn't possible with unRAID's implementation of VLANs and I need to pass each VLAN as a different NIC to pfSense.
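     For reference, the passed NIC boils down to a single virtio interface bound to the bridge; a hypothetical excerpt of what that looks like in the VM's XML (the MAC is a placeholder):

       <interface type='bridge'>
         <source bridge='br0'/>
         <model type='virtio'/>
         <mac address='52:54:00:00:00:01'/>
       </interface>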
  6. Hey Grid. First of all, thanks for all the videos. I watched the first pfSense video but ventured out on my own before the 2nd was released. I'll check it out now. This past weekend I had my first taste of pfSense and VLANs (in general I'm good with unRAID, UniFi and VMs). After about 3 days of effort between premises wiring, the pfSense VM configuration, a Netgear switch, unRAID VLANs and the UniFi controller (in a Docker, no less), things are going well. My setup is as follows: pfSense has the two physical NICs passed through, each with 1 port.
     1. WAN from the cable modem.
     2. Original SSID and my existing items, still on the 192.168.147.1/24 LAN.
     Other interfaces are:
     3. VLAN10 at 10.10.10.1/24. It has its own SSID as well as a guest SSID with a captive portal through the UniFi controller.
     4. A virtual interface, one of the virtual bridges in unRAID, but as of now IS NOT USED in pfSense.
     Now that things work and have settled down, the remaining question for anybody is one of efficiency/optimization. The physical LAN connection to pfSense has my main LAN untagged and VLAN10 tagged. The physical LAN connection to unRAID has my main LAN untagged and VLAN10 tagged. You see where this is going... I can save a switch port, gain a PCIe x1 slot back and maybe gain some speed if I eliminate the physical LAN NIC and pass unRAID's br0 (or maybe BOTH br0 and br0.10) through to the pfSense VM. I would think the virtual 10gig network is hella fast. Am I asking for trouble here? Again, this is my first experience with VLANs and my first experience with pfSense, so I'm not sure if I should just leave well enough alone. What do y'all think? Thanks, --dimes
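     As far as I understand it, br0.10 is just an 802.1Q sub-interface on top of the bridge; the hand-rolled equivalent (illustration only, unRAID builds these itself when VLANs are enabled in Network Settings) would be:

       ip link add link br0 name br0.10 type vlan id 10

     Which is what makes me think passing plain br0 might carry the tagged frames through untouched and let pfSense do the VLAN splitting itself.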
  7. Every time CA Backup's stopping and starting of dockers trips up Nextcloud, I google "unraid hulk smash" to get back here for the fix. I may never remember how to get a shell prompt in my docker and then run # mysqld --tc-heuristic-recover commit, but I may never forget "hulk smash". Thanks chaosratt.
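     The shell-prompt part, for future me (the container name here is hypothetical; `docker ps` shows the real one):

       # open a shell inside the database container:
       docker exec -it nextcloud-db bash
       # then, inside the container, the fix from this thread:
       mysqld --tc-heuristic-recover commit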
  8. Yes. "Disk Encrypted and Unlocked". Things seem fine. Moving data to it now. I thought I was going to get a new icon in the settings tab? Maybe that was only when it was in 6.4 beta? Maybe it only shows up if you use a keyfile? --dimes
  9. I reformatted encrypted and things seem fine. Not worried about forensic recovery. The 5TB was a parity disk. I'll be moving data from a SED disk currently in the array to this disk. I haven't seen much here about the performance impact (either CPU or disk speed) with encryption turned on. Not that I need blazing speed, but SED has no tangible performance hit, while an encrypted array drive might. The only remaining curiosity is that I don't have an encryption icon in Settings?? Thank you both for the quick and accurate guidance. --dimes
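     If anyone else is curious about the CPU side, cryptsetup can benchmark the ciphers LUKS uses right from the unRAID console; with AES-NI on this 8700k the XTS numbers should be far above what a single drive can push, which would line up with "no tangible hit":

       cryptsetup benchmark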
  10. I have an empty 5TB drive, unencrypted, and am looking to move it to encrypted. I've got the "clear-me" directory ready and was about to run the clear script, but it seems unnecessary. I think I'm correct that the shrink-array procedure (including clearing), THEN adding the disk back formatted xfs-encrypted, maintains parity; but since the format itself updates parity, the clear looks like an unnecessary step. Is the only way to maintain parity to clear => shrink => add, or is there a way to avoid the clear? Thanks, --dimes
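     For background, my understanding is the clear script just zeroes the disk through the parity-protected md device, which is why parity stays valid afterwards. The raw equivalent would be something like the below (md1 is a placeholder for the disk's slot, and this is destructive, so use the script rather than typing this by hand):

       dd if=/dev/zero of=/dev/md1 bs=1M status=progress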
  11. On the egg they're basically the same price. +2 SATA is good, but is it "ULTRA DURABLE"? lolz. --- Video passthrough worked without issue: BIOS changed to integrated graphics as primary. ACS override was turned on because two of the cards were in the same IOMMU group. I passed the AMD RX460 through to a separate Windows 10 VM (SeaBIOS). Still fiddling with passing through the other two video cards, and some USB controllers next, but I'm thrilled with how this is going thus far.
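     For anyone checking their own board, the usual loop over sysfs shows which devices are lumped together (this is why I needed the ACS override):

       for d in /sys/kernel/iommu_groups/*/devices/*; do
         n=${d#*/iommu_groups/*}; n=${n%%/*}
         printf 'IOMMU group %s: ' "$n"
         lspci -nns "${d##*/}"
       done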
  12. The marketing nonsense talks about it being "ultra durable", but really it was the cheapest board with 3 PCIe x16 (physical) slots on the iBuyPower deal I was using. I ended up with an 8700k, 8GB RAM, that MB, an RX580 8GB, a 240GB SSD, a 2TB HDD and a decent Thermaltake case for $1151 shipped. The CPU, graphics card, MB, SSD, HDD and RAM part out to about $1200. So build & burn-in, LED light strip nonsense, mousepad, mechanical keyboard, shipping, etc. for -$50.
  13. I tried to make a Ryzen 7 1700X work but could never get GPU passthrough to work, and I returned the machine before the return window ran out. First impressions with the 8700k on unRAID 6.3.5 - IT WORKS! Cinebench results in a Windows 10 VM using all cores were near bare metal, as to be expected (max of 1515 on bare metal, max of 1494 in the VM). These scores are with "Multi Core Enhancement" enabled but no other overclocking done. Note: Windows 10 setup using the OVMF BIOS left me at the splash screen for an uncomfortably long time, but eventually it does come back and let you install. Perhaps it's like that on other hardware and I just don't recall the setup. Time to move on to passthrough: I've got the 3 video cards piled in as well as the IGP, which I'll use for the console. Motherboard is a Gigabyte Z370XP-SLI. --SLD
  14. This page has some tips: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF "Using huge pages for guest and bigger huge page size(e.g. 1 GB) could reduce periodical micro-freezes of whole VM introduced by disabled NPT. If periodical stuttering still occurs try removing smep feature from vCPU..." Nonetheless, I still haven't gotten a GPU to pass through yet. Thanks for the advice above. Will try ASAP.
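     For the record, the wiki's two suggestions map to VM XML along these lines (hypothetical fragments; exact placement and hugepage sizing would need tuning per the wiki):

       <memoryBacking>
         <hugepages/>
       </memoryBacking>
       <cpu mode='host-passthrough'>
         <feature policy='disable' name='smep'/>
       </cpu>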
  15. Is the change between NPT on or off transparent to the guest? If I set up a new Ryzen environment and get it working, then when and if a fix is released to allow NPT with GPU passthrough, can I just change the setting, or will Windows guests have issues and need to be rebuilt? I have to decide if I'm going to keep my new Ryzen box or not. ;( Thanks.
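     From what I can tell, NPT is a host-side kvm_amd module parameter (the guest just sees normal paging), which is why I'd expect no rebuild to be needed; checking and flipping it looks like this, with no VMs running:

       # current state:
       cat /sys/module/kvm_amd/parameters/npt
       # reload the module with NPT off:
       modprobe -r kvm_amd && modprobe kvm_amd npt=0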