johnsanc

Members
  • Content Count

    205
  • Joined

  • Last visited

Community Reputation

1 Neutral

About johnsanc

  • Rank
    Advanced Member
  • Birthday 03/14/1984

Converted

  • Gender
    Male
  • Location
    Charlotte, NC

  1. Hmmm... I'm going to have to dig deeper into my dockers. When the Docker service is disabled, I can clear the ARP table and I get the correct MAC address. If I start the Docker service and clear the ARP table again, I get the weird random MAC address. EDIT: OK, the MAC address I am seeing in pfSense when I turn on the Docker service is the one from "shim-br0". I assume this is because I have "Host access to custom networks" enabled in my Docker settings. I believe I only needed this for Pi-hole, but I have since replaced that with pfBlockerNG. Once I disabled host access to custom networks, I saw my correct MAC address in pfSense.
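     A quick way to confirm which interface owns the MAC address the firewall is learning (a minimal sketch; the first command runs on Unraid, the rest on pfSense, and <unraid-ip> is a placeholder for the server's LAN address):

        # On Unraid: list interfaces and their MAC addresses; look for br0 and shim-br0
        ip -br link show | grep -i br0

        # On pfSense (FreeBSD): flush the ARP table, force a re-learn, inspect the result
        arp -d -a
        ping -c 1 <unraid-ip>
        arp -n <unraid-ip>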
  2. I recently added a pfSense box to my network and assigned some static IP addresses. I see a ton of entries like this in my pfSense logs: "arp: xx:xx:xx:xx:xx:xx attempts to modify permanent entry for 192.168.y.yyy on igb1". It looks like something is changing my Unraid MAC address to the address represented by the xx's above. I thought maybe it was my Pi-hole docker, but I deleted that since I no longer need it. Upon each reboot I get a different random and unique MAC address for Unraid. Any ideas what could be causing this?
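     To find out which host is actually sending the conflicting ARP replies, capturing ARP traffic on the firewall can help (a sketch; igb1 is the interface named in the log entry above, and 192.168.y.yyy stands in for the static IP in question):

        # On pfSense: print ARP packets with their link-level (MAC) headers (-e)
        tcpdump -n -e -i igb1 arp and host 192.168.y.yyy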
  3. Forgot to follow up, but I got it working. I turned my old router into an AP and set up a pfSense box. I had forgotten to add the static route in pfSense; once I added it, things worked perfectly.
  4. I tried adding my router IP and 8.8.8.8 to the Peer DNS Server field, and it did not allow me to access anything aside from my LAN when using "Remote Tunneled Access". Any idea what the issue could be? EDIT: Apparently if I use NAT then I can access the internet using Remote Tunneled Access. Is there a way to make that work without the NAT setting set to "Yes"?
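     For context, the NAT "Yes" setting has roughly the effect of masquerading tunnel traffic as the Unraid host so replies find their way back; without it, the LAN router needs a static route for the tunnel subnet pointing at the Unraid IP. A sketch of an iptables rule with a similar effect (10.253.0.0/24 and br0 are the defaults discussed in this thread, and this is not necessarily the exact rule Unraid applies):

        # Masquerade WireGuard tunnel traffic leaving via the LAN bridge
        iptables -t nat -A POSTROUTING -s 10.253.0.0/24 -o br0 -j MASQUERADE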
  5. I recently upgraded my parity drives to 12TB. For now, my largest data drives are 10TB. I noticed that during a parity check, the check progresses past the 10TB mark and processes the full 12TB even though there is no data to check against. Why does this happen? Wouldn't it be more efficient to stop after the last bit of data from the data drives? No support needed, just a general question I was pondering.
  6. Just following up to confirm that upgrading to 6.9-beta1 seems to have fixed the issue. Thank you all for your help and guidance as always.
  7. I am really struggling with this one and must have read through this entire thread 3 times now. Here is what I have so far:
       • Local server uses NAT: No
       • Local endpoint: my external IP : 51820
       • Peer type of access: Remote tunneled access
       • All local tunnel/peer settings are defaults
       • My Docker config is set to allow host access to custom networks
       • The Docker IPv4 custom network I have uses the same subnet
       • I forwarded port 51820 to my Unraid server's internal IP
       • I added a static route in my router: Destination IP 10.253.0.0, IP Subnet Mask 255.255.255.0, Gateway IP = Unraid internal IP address, Metric 2 (no idea what the metric is for, and Netgear's help is not helpful; supposedly it is the number of routers on the network?)
     Now, when I try to ping 10.253.0.1 from the command line it works:
        PING 10.253.0.1 (10.253.0.1): 56 data bytes
        64 bytes from 10.253.0.1: icmp_seq=0 ttl=64 time=1.303 ms
        64 bytes from 10.253.0.1: icmp_seq=1 ttl=64 time=2.949 ms
        64 bytes from 10.253.0.1: icmp_seq=2 ttl=64 time=2.096 ms
        64 bytes from 10.253.0.1: icmp_seq=3 ttl=64 time=2.886 ms
        64 bytes from 10.253.0.1: icmp_seq=4 ttl=64 time=3.213 ms
        64 bytes from 10.253.0.1: icmp_seq=5 ttl=64 time=2.095 ms
     When I try to ping 10.253.0.2 I get "Destination Host Unreachable" errors, but I can also see that the errors show the Redirect Host pointing to my Unraid server IP. I tried connecting with both my iPhone and the macOS WireGuard app, and both show the 5-second handshake timeout error. Anyone have any suggestions? I feel like I have to be missing something obvious.
     EDIT: I completely forgot about my piece of hot garbage AT&T Pace gateway for my fiber connection. Since AT&T's firmware update broke DMZ+ mode a year ago (still not fixed), I had most ports opened to my Netgear router... but the range ended at 50999, since AT&T has a few service ports reserved above that. I changed my WireGuard port to something within the forwarded range and it worked without a hitch. However, how do I access both my LAN and the internet at the same time on the VPN? Do I need to select a different "peer type of access"?
     EDIT2:
       • Remote tunneled access = LAN access + no interwebs on the device I'm using to VPN in
       • Remote access to LAN = LAN access + interwebs
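     When chasing a handshake timeout like this, it helps to confirm the UDP packets are reaching WireGuard at all before touching routes (a sketch; wg0 is the default Unraid tunnel name, br0 may differ on your system, and tcpdump may need to be installed separately on Unraid):

        # On Unraid: watch for inbound WireGuard traffic on the forwarded port
        tcpdump -n -i br0 udp port 51820

        # Show when each peer last completed a handshake (0 means never)
        wg show wg0 latest-handshakes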
  8. Yep, I'm going to kick off another parity check to make sure there are zero errors. It's not an Unraid problem per se, but doesn't the behavior above indicate that Unraid does not re-read the sync correction it just made to ensure it's valid? If not, it would be nice to have a "Parity Check with Validation" option.
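     In the meantime, a follow-up non-correcting check works as a manual validation pass (a sketch using Unraid's mdcmd interface; the command and option names below are my recollection and may differ between releases, so treat them as an assumption):

        # Start a read-only parity check: errors are counted but nothing is written
        /usr/local/sbin/mdcmd check NOCORRECT

        # Unraid's md driver reports progress and the sync error count here
        cat /proc/mdstat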
  9. Another quick update: my parity check started firing off corrections at about the 9.25 TB mark, which is right about where I started getting the IO_PAGE_FAULT error the other day during my parity check. So, after this ordeal I am left with a couple of takeaways:
       • It's possible for Unraid to write bad parity, and there is nothing in the Web UI that would indicate anything went wrong unless you look at the syslog.
       • The "bad parity writing" issue starts with the lovely AMD IO_PAGE_FAULT error. In my case there were a few XFS errors after this and my log was not flooded... but the parity was indeed incorrect for every sector after that point.
     So, although I think I have recovered from this, it's a bit concerning that this is apparently a scenario that can write bad parity without the user knowing. This could leave someone with a completely unprotected array, and they would not even know it until their next parity check.
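     Since the Web UI stays quiet, scanning the syslog for this specific error is one way to catch it early (a sketch; the notify script path and flags are what I understand current Unraid builds to ship, so verify them on your version before relying on this):

        # Raise an Unraid notification if an IOMMU page fault has been logged
        if grep -q 'IO_PAGE_FAULT' /var/log/syslog; then
            /usr/local/emhttp/webGui/scripts/notify -i alert \
                -s "IO_PAGE_FAULT detected" \
                -d "Parity written after this point may be bad - check syslog"
        fi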
  10. I do have a SAS HBA + expander as well: LSI LSI00301 (9207-8i) + Intel RES2SV240NC. Interesting about the memory: it is ECC and straight from my motherboard's QVL for RAM. Since I just upgraded to v6.9-beta1, I will let this parity check complete and monitor the logs for any more similar errors before I attempt to change any other settings.
  11. How do you know if an XFS check is "good" (with or without -n)? I don't see any kind of exit code in the syslog. I've attached the output of the XFS checks for the two disks using -vv (without the -n). xfs_check.txt
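     Running the check from a shell makes the exit code visible, which the GUI and syslog do not surface (a sketch; run with the array in maintenance mode, and substitute the md device that matches your disk number):

        # Non-destructive XFS check; -n means report problems but modify nothing
        xfs_repair -n /dev/md9
        echo $?   # 0 = no corruption detected, non-zero = corruption was found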
  12. I stopped the remainder of the check, upgraded to 6.9-b1, rebooted, and did an XFS check on disks 9 and 10. Nothing seemed to indicate any issues as far as I can tell. I am now running another correcting parity check.
        Phase 1 - find and verify superblock...
        Phase 2 - using internal log
                - zero log...
                - scan filesystem freespace and inode maps...
                - found root inode chunk
        Phase 3 - for each AG...
                - scan (but don't clear) agi unlinked lists...
                - process known inodes and perform inode discovery...
                - agno = 0
                - agno = 1
                - agno = 2
                - agno = 3
                - agno = 4
                - agno = 5
                - agno = 6
                - agno = 7
                - agno = 8
                - agno = 9
                - agno = 10
                - agno = 11
                - agno = 12
                - agno = 13
                - agno = 14
                - agno = 15
                - agno = 16
                - agno = 17
                - agno = 18
                - agno = 19
                - process newly discovered inodes...
        Phase 4 - check for duplicate blocks...
                - setting up duplicate extent list...
                - check for inodes claiming duplicate blocks...
                - agno = 1
                - agno = 8
                - agno = 13
                - agno = 15
                - agno = 4
                - agno = 0
                - agno = 2
                - agno = 7
                - agno = 6
                - agno = 10
                - agno = 11
                - agno = 12
                - agno = 14
                - agno = 3
                - agno = 17
                - agno = 19
                - agno = 16
                - agno = 18
                - agno = 5
                - agno = 9
        No modify flag set, skipping phase 5
        Phase 6 - check inode connectivity...
                - traversing filesystem ...
                - traversal finished ...
                - moving disconnected inodes to lost+found ...
        Phase 7 - verify link counts...
        No modify flag set, skipping filesystem flush and exiting.
  13. Thanks @johnnie.black - Once this check completes I may try to upgrade to v6.9-b1, do the XFS checks/repairs, then do another parity check. Do you think that is a good next step or would you recommend something else?
  14. Well, here is an update so far... The data disks are done with the parity check, but it's currently checking nothing because my parity drives are 12 TB and my largest data drive is 10 TB. It looks like I had an IO_PAGE_FAULT error and then, a few minutes later, some XFS metadata errors: first on disk10 (which was still parity checking), then later on disk9 (which was already done checking). I can still access those disks and they are not emulated. Looking back at my old logs, the same thing happened the last time I got XFS errors in the log. In all cases the IO_PAGE_FAULT came from my "ASM1062 Serial ATA Controller", which is onboard. Also not sure if it's related, but I noticed in the logs that the XFS issues appeared shortly after 5:00 AM in both this run on 6/5 and on 6/2 (within one minute).
      So should I continue with another check? Or should I try to do an XFS repair on the two disks that have issues? Try to copy the data off and reformat those drives? Something else to try to fix whatever the controller issue is? Upgrade to the 6.9 beta for better X570 support? Any guidance on next steps is appreciated as always.
      UPDATE: This seems very much related to issues I was having before. X570 woes. I also noticed that I forgot to add "iommu=pt avic=1" to my syslinux.cfg for Unraid GUI mode, which I am currently using. tower-diagnostics-20200605-0905.zip
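      For reference, those flags go on the append line of whichever boot entry is actually in use (a sketch of the relevant /boot/syslinux/syslinux.cfg stanza; the label and initrd list here are my recollection of what stock Unraid ships for GUI mode, so compare against your own file rather than copying verbatim):

         label Unraid OS GUI Mode
           kernel /bzimage
           append iommu=pt avic=1 initrd=/bzroot,/bzroot-gui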