About johnsanc

  • Birthday: 03/14/1984
  • Location: Charlotte, NC

Posts
  1. @JorgeB - Thanks so much for that link. I had read that post, but after re-reading it carefully and checking, I see now that my 9207-8i + RES2SV240 is actually capable of supporting 16 drives at 275 MB/s. For some reason I had these diagrams and their associated speeds stuck in my head, which are for the LSI 2008 chipset. So that being said, it looks like some potential options are:
     - 12 drives: Change my current setup to single link for the 12 drives in my main enclosure using my existing 9207-8i + RES2SV240, route a SAS cable to my other enclosure, and add another RES2SV240 for another 12 drives using single link. This should result in a slight bottleneck, but still very acceptable speeds, without needing an extra HBA.
     - 16 drives: Add a 9207-8e + RES2SV240 in dual link for 275 MB/s for the additional 16 drives.
     - 20 drives: Add a 9207-8e + RES2SV240 in dual link for 275 MB/s for 16 drives, and route a cable from the internal expander out to the new enclosure for the additional 4 drives (since I'm only using 12 currently with a dual link setup).
     So it sounds like getting a new HBA/expander pair would be a good way to go if I didn't want to sacrifice any potential speed from my current setup.
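The per-drive numbers above can be sanity-checked with quick arithmetic. The ~550 MB/s of usable bandwidth per 6 Gb/s SAS2 lane is a commonly quoted rule of thumb (line rate minus 8b/10b encoding and protocol overhead), not a measured value, so treat this as a rough sketch:

```python
# Back-of-envelope bandwidth per drive behind a SAS2 HBA <-> expander link.
# Assumption: ~550 MB/s usable per 6 Gb/s SAS2 lane after encoding overhead.
USABLE_MB_S_PER_LANE = 550

def per_drive_mb_s(lanes: int, drives: int) -> float:
    """Evenly split the HBA<->expander link bandwidth across the drives."""
    return USABLE_MB_S_PER_LANE * lanes / drives

print(per_drive_mb_s(8, 16))   # dual link (2x4 lanes), 16 drives -> 275.0
print(per_drive_mb_s(4, 12))   # single link, 12 drives -> ~183.3
print(per_drive_mb_s(8, 20))   # dual link, 20 drives -> 220.0
```

At ~183 MB/s per drive, a single link with 12 modern HDDs is only a mild bottleneck during the outer-track portion of a parity check, which matches the "slight bottleneck, still very acceptable" expectation above.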
  2. Yes, I should have clarified - I am only interested in the HBA/expander. My current setup is an Antec 1200 with 4x 3-in-5 drive cages and it works beautifully. It wasn't the most cost-effective storage solution for a lot of drives, but it was able to grow with my array over the years. And frankly, it just looks way better than a rack IMO. My goal is to create another matching tower just for the extra drives, using the same drive cages I use for my main enclosure. The problem is that I only have one PCIe 4.0 x16 slot available to use. So here's what I've deduced so far:
     - 12 drives = 9207-8e + RES2SV240 (dual link, basically mirroring what I have now internally; single link may also work with very little bottleneck)
     - 16 drives = not sure
     - 20 drives = not sure
  3. I know there are a lot of posts about choosing HBAs and expanders, so maybe this will help others as well who are googling trying to figure this out... What is the current cheapest way to add support for an additional _____ drives with zero bottlenecks? 20 HDDs? 16 HDDs? 12 HDDs?
     Assumptions:
     - One PCIe 4.0 x16 slot available
     - Two (almost useless) PCIe 2.0 x1 slots available
     - The extra drives will be housed in a separate spare tower case
     - Only HDDs will be connected to the HBA(s)/expander(s) - SSDs will be direct to motherboard SATA
     Background:
     - My main box right now holds 20 drives total and I have no space left (16 data, 2 parity, 2 cache)
     - I want to support the max that Unraid is capable of (28 data, 2 parity, at least 2 cache, and a couple spare slots for unassigned devices)
     - I currently use my onboard SATA (8 drives) along with a 9207-8i + RES2SV240NC in dual link (12 drives)
     - The motherboard I am using is an ASRock X570 Creator and the 9207-8i is currently in a PCIe 4.0 x16 slot
     - The minimum I would need to support is 12 additional drives, but ideally I would like to support 16 or even 20 if it's not cost prohibitive
     Any recommendations are appreciated.
  4. Hmmm... I'm going to have to dig deeper into my Dockers. When the Docker service is disabled, I can clear the ARP table and I get the correct MAC address. If I start the Docker service and clear the ARP table again, then I get the weird random MAC address.
     EDIT: OK, the MAC address I am seeing in pfSense when I turn on the Docker service is the one from "shim-br0". I assume this is because I have "Host access to custom networks" enabled in my Docker settings. I believe I only needed this for Pi-hole, but I have since replaced that with pfBlockerNG. Once I disabled the custom networks I saw my correct MAC address in pfSense.
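A quick way to confirm which interface owns the MAC that pfSense is logging; note that "shim-br0" only exists while the Docker service is running with "Host access to custom networks" enabled, and the interface names here are assumptions for a typical Unraid box:

```shell
# List every interface with its MAC address; compare against the address
# appearing in the pfSense ARP log (look for br0 / shim-br0 entries).
ip -o link show | awk '{print $2, $(NF-2)}'

# Flush the neighbor (ARP) cache, then re-test to see which MAC re-registers
# for the server's IP (requires root).
ip neigh flush all
```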
  5. I recently added a pfSense box to my network and assigned some static IP addresses. I see a ton of entries like this in my pfSense logs:
     arp: xx:xx:xx:xx:xx:xx attempts to modify permanent entry for 192.168.y.yyy on igb1
     It looks like something is changing my Unraid MAC address to the address represented by the xxx's above. I thought maybe it was my Pi-hole docker, but I deleted that since I no longer need it. Upon each reboot I get a different random, unique MAC address for Unraid. Any ideas what could be causing this?
  6. Forgot to follow up, but I got it working. I turned my old router into an AP and set up a pfSense box. I had forgotten to add the static route in pfSense; once I added it, things worked perfectly.
  7. I tried adding my router IP to the Peer DNS Server, and it did not allow me to access anything aside from my LAN when using "Remote Tunneled Access". Any idea what the issue could be?
     EDIT: Apparently if I use NAT then I can access the internet using Remote Tunneled Access. Is there a way to make that work without the NAT setting set to "Yes"?
  8. I recently upgraded my parity drives to 12TB. For now, my largest data drives are 10TB. I noticed that during a parity check, the check progresses past the 10TB mark and processes the full 12TB even though there is no data to check against. Why does this happen? Wouldn't it be more efficient to stop after the last bit of data on the data drives? No support needed, just a general question I was pondering.
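One plausible explanation (an assumption, not confirmed in this thread): the parity region beyond the largest data disk should be all zeros, and the check verifies those sectors too, so it still has something to validate past the 10TB mark. A toy single-parity XOR model:

```python
# Toy model of single (XOR) parity across array "disks" (hypothetical data).
from functools import reduce

data_disks = [b"\x12\x34", b"\xff\x00"]   # two tiny 2-byte "data disks"

# Parity byte = XOR of the corresponding byte on every data disk.
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_disks))
print(parity.hex())   # ed34

# Where a larger parity disk extends past all data disks, the XOR of
# "no data" is zero, so a check there verifies the sectors read back as zero.
beyond_data = bytes(2)
assert beyond_data == b"\x00\x00"
```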
  9. Just following up to confirm that upgrading to 6.9-beta1 seemed to have fixed the issue. Thank you all for your help and guidance as always.
  10. I am really struggling with this one and must have read through this entire thread 3 times now. Here is what I have so far:
      - Local server uses NAT: No
      - Local endpoint: my external IP : 51820
      - Peer type of access: Remote tunneled access
      - All local tunnel/peer settings are defaults
      - My Docker config is set to allow host access to custom networks
      - The Docker IPv4 custom network I have uses the same subnet
      - I forwarded port 51820 to my Unraid server's internal IP
      - I added a static route in my router: Destination IP:, IP Subnet Mask:, Gateway IP: Unraid internal IP address, Metric: 2 (no idea what this is for, and Netgear's help is not helpful - supposedly this is the number of routers on the network?)
      Now, when I try to ping with the command line it works:
      PING ( 56 data bytes
      64 bytes from icmp_seq=0 ttl=64 time=1.303 ms
      64 bytes from icmp_seq=1 ttl=64 time=2.949 ms
      64 bytes from icmp_seq=2 ttl=64 time=2.096 ms
      64 bytes from icmp_seq=3 ttl=64 time=2.886 ms
      64 bytes from icmp_seq=4 ttl=64 time=3.213 ms
      64 bytes from icmp_seq=5 ttl=64 time=2.095 ms
      When I try to ping I get "Destination Host Unreachable" errors, but I can also see that the errors show the Redirect Host going to my Unraid server IP. I tried connecting with both my iPhone and the macOS WireGuard app, and both show the 5-second timeout handshake error. Anyone have any suggestions? I feel like I have to be missing something obvious.
      EDIT: I completely forgot about my piece of hot garbage AT&T Pace gateway for my fiber connection. Since AT&T's firmware update broke DMZ+ mode a year ago (still not fixed), I had most ports opened to my Netgear router... but the range ended at 50999, since AT&T has a few service ports reserved above that. I changed my WireGuard port to something in range of what I forwarded and it worked without a hitch. However, how do I access both my LAN and the internet at the same time on the VPN? Do I need to select a different "peer type of access"?
      EDIT2:
      - Remote tunneled access = LAN access + no interwebs on the device I'm using to VPN in
      - Remote access to LAN = LAN access + interwebs
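For reference, the two access types differ in the AllowedIPs that end up in the client's peer config: a broad route sends all device traffic through the tunnel, a narrow one sends only LAN traffic. A minimal client-side sketch (the keys, endpoint, and subnet below are placeholders, not values from this setup):

```ini
# Hypothetical WireGuard client config fragment; all values are placeholders.
[Peer]
PublicKey = <server public key>
Endpoint = <external ip>:51820

# "Remote access to LAN": route only the LAN subnet through the tunnel,
# so the device keeps its normal internet path.
AllowedIPs = 192.168.1.0/24

# "Remote tunneled access": route everything through the tunnel instead
# (internet then only works if the server NATs or routes that traffic).
# AllowedIPs = 0.0.0.0/0
```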
  11. Yep, I'm going to kick off another parity check to make sure there are zero errors. It's not an Unraid problem per se, but doesn't the behavior above indicate that Unraid does not re-read the sync correction it just made to ensure it's valid? If not, it would be nice to have a "Parity Check with Validation" option.
  12. Another quick update: my parity check started firing off corrections at about the 9.25 TB mark, which is right about where I started getting the IO_PAGE_FAULT error the other day during my parity check. So, after this ordeal I am left with a couple of takeaways:
      - It's possible for Unraid to write bad parity, and there is nothing in the Web UI that would indicate anything went wrong unless you look at the syslog.
      - The "bad parity writing" issue starts with the lovely AMD IO_PAGE_FAULT error. In my case there were a few XFS errors after this and my log was not flooded... but the parity was indeed incorrect for every sector after that point.
      So, although I think I have recovered from this, it's a bit concerning that this is apparently a scenario that can write bad parity without the user knowing. This could leave someone with a completely unprotected array and they would not even know it until their next parity check.
  13. I do have a SAS HBA + expander as well: LSI LSI00301 (9207-8i) + Intel RES2SV240NC. Interesting about the memory - it is ECC and straight from my motherboard's QVL for RAM. Since I just upgraded to v6.9-beta1, I will let this parity check complete and monitor the logs for any more similar errors before I attempt to change any other settings.
  14. How do you know if an xfs check is "good" (with or without -n)? I don't see any kind of exit code in the syslog. I've attached the output of the xfs checks for the two disks using -vv (without the -n): xfs_check.txt
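For what it's worth, the usual signal is the command's exit status rather than anything in the syslog: with -n, xfs_repair exits 0 when no corruption was detected and 1 when it found something. A sketch from the command line (the device path is a placeholder; substitute the partition you are checking):

```shell
# Read-only check; /dev/md1 is a placeholder for the disk being checked.
xfs_repair -n /dev/md1
echo "exit status: $?"   # 0 = no corruption detected, 1 = corruption found
```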