herringms

  1. I spent some time on my flight today outlining what I think might work. Unfortunately, I feel that without a firewall in front of the PfSense you still have a security risk in the event of a configuration clear, since the Unraid host could become reachable through APIPA. I wouldn't be too worried normally, but the conditions here are that we may trust the infrastructure and topology, but not the occupants of the network. Ideally, you want something that makes the host inaccessible in the event the configuration is cleared, so I'm suggesting you must have a layer-2 VLAN that's a single step away from PfSense, between the ISP modem and the PfSense host, to make sure the Unraid host does not become addressable. I have two configs: one as three routers in a Y config, and one as a single router. The single-router config I'm less sure of without testing, but it would look something like the following:

     Physical topology:
     ISP Gateway -> [optional: additional layer-2 device on VLAN 10 ->] PfSense box eth0[.10] (passthrough)
     PfSense box eth0.20 & eth1.30 (bridge) -> (LAN) switch

     Virtual connections / interfaces:
     PfSense outside (eth0) (statically assigned, no upstream DHCP) OR PfSense outside (eth0.10) (DHCP)
     PfSense outside (eth0[.10]) -> eth1.20
     PfSense outside (eth0[.10]) -> eth1.30

     Unraid config:
     VLANs 10, 20, 30
     Passthrough eth0
     Bridge eth1

     The inside network is safe if the config is cleared, because PfSense won't route. Unraid becomes exposed if DHCP is served from the modem and there is no firewall, or if there is no required VLAN for eth0 to get out. Putting a hardware device that can serve a VLAN in front of the PfSense seems the only safe way to protect this from a configuration wipe. I'm looking into a $50 TPLink that may be a good candidate.

     PfSense config (statically set, no upstream DHCP):
     outside: public address info
     inside: 172.18.11.1/24
     firewall: block between the two, block to Unraid where desired

     PfSense config (DHCP) [additional layer-2 device with VLAN ports required]:
     VLAN 10 configured
     outside: eth0.10 with DHCP
     (LAN) inside: 172.18.11.1/24
     opt int 2 config inside: 172.18.20.1/24
     opt int 3 config inside: 172.18.30.1/24

     If this is looking good to others, I'll test it when I get home in a week and write up a guide for PfSense as a border router on a 2-NIC Unraid host.
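     To make those exposure conditions concrete, here is a minimal Python sketch of the logic I'm describing (my own simplification for discussion, not something I've tested against a real configuration wipe):

     # Toy model of the exposure conditions above; a planning aid, not a guarantee.
     def host_exposed_after_config_wipe(modem_serves_dhcp: bool,
                                        firewall_upstream: bool,
                                        vlan_required_on_eth0: bool) -> bool:
         """True if the Unraid host could become addressable after its config is cleared.

         Per the outline: exposure happens if the modem serves DHCP with no firewall
         in front, OR if no VLAN tag is required for eth0 to reach the parent network.
         """
         via_dhcp = modem_serves_dhcp and not firewall_upstream
         via_untagged_uplink = not vlan_required_on_eth0
         return via_dhcp or via_untagged_uplink

     # Modem hands out DHCP and nothing upstream filters it: exposed even with VLAN 10 required.
     print(host_exposed_after_config_wipe(True, False, True))    # True
     # No modem DHCP reaches us and VLAN 10 is still required upstream: stays unreachable.
     print(host_exposed_after_config_wipe(False, False, True))   # False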
  2. All great points! I needed to think about them for a while, but I don't think I can really solve those problems. I'm going to amend the disclaimer, as this is not secure, but rather a proof of concept. I can add a note about the 172 addresses, but in my model I stepped up to 172.17 /24s to avoid most conflicts. I was more concerned about conflicts with SD-WANs on 10/8 (this is one of the places I expect to use this config) and residential devices on 192.168 /24s. I believe avoiding difficult-to-troubleshoot routing issues is going to be much more important than confusion over subnets in the 172 range, as this has to be a technical and complex process by nature.

     For the second, you're right, eth0 is the way to go. I'm not using this as an actual edge device and probably wouldn't recommend it. I don't list steps in the guide to lock down Unraid from the parent network, and I don't think I'd want to, BUT I should mention that. For me, currently, it's about portability of my containers, VMs, and Unraid in general. It was also about recovering a few devices with hard-set VLANs and SSIDs without wiping them. We're inherently needing to put a ton of trust in the upstream as it is.

     I'm reminded of Steve Gibson's Y configuration for routers (https://pcper.com/2016/08/steve-gibsons-three-router-solution-to-iot-insecurity/), and am wondering whether three virtual routers, with eth1 and eth0 available selectively to each router, would be a good solution. However, I feel like that only makes sense if you're looking for your PfSense to be the primary router; I'd need to really plan that out.
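     As a quick sanity check on the addressing choice, here's a small Python snippet (just an illustration; which /24s you actually carve out of 172.17.0.0/16 is up to you). It shows the 172.17 space avoids 10/8 and the usual residential 192.168 ranges, while still sitting inside 172.16.0.0/12, which is why the note in the guide about Unraid's default 172 addresses matters:

     import ipaddress

     vlan_space = ipaddress.ip_network("172.17.0.0/16")   # where my VLAN subnets live

     # Ranges I was worried about colliding with:
     sd_wan = ipaddress.ip_network("10.0.0.0/8")
     residential = ipaddress.ip_network("192.168.0.0/16")
     rfc1918_172 = ipaddress.ip_network("172.16.0.0/12")  # Unraid/Docker defaults live in here too

     print(vlan_space.overlaps(sd_wan))        # False
     print(vlan_space.overlaps(residential))   # False
     print(vlan_space.subnet_of(rfc1918_172))  # True, so still watch the host's own 172.x defaults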
  3. Greetings all, I saw some previous posts about this and users having difficulties getting this set up, e.g. https://forums.unraid.net/topic/115277-how-to-setup-pfsense-vmware-on-unraid-os-with-2-nic-ports

     I'll start by saying I'm not a Linux guy, but I have some basic networking and troubleshooting experience. This is a pretty advanced process no matter who you are, and I doubt many people will get through it without a mistake or two. I'm open to recommendations and corrections, and I may be able to help with some mild questions, especially requests for clarification, but will probably ignore anything too complicated.

     Disclaimer: Do not attempt this if you do not understand the networking concepts being used.

     Disclaimer 2: The firewall rules are just what I feel comfortable with and not a recommendation. Think of them as sample code. You should plan, implement and test your own rules to make sure they meet your expectations and requirements.

     Border Router Disclaimer: If you want PfSense to be your border router, this configuration is insecure. We're inherently putting a lot of trust in the upstream, and this guide also leaves Unraid reachable from the parent network. I may work on a future guide that offers a more secure configuration for border routers, but as is, this configuration should not be considered secure. It is perfectly possible to make it secure; my future guide would just focus on making the entire "secure border router" configuration more resilient.

     The challenge: With only two NICs, you need to keep one of them bridged and virtually available so you don't risk taking the entire OS offline and going into some form of recovery. There is no clear consensus on the best, or even a working, method for doing this with a FreeBSD custom kernel like PfSense.

     My specific case: I'm transitorily homeless and wanted to be able to configure my own VLANs, networks, wireless broadcasts, firewall rules, DHCP with DNS options, Pi-hole server, etc. behind my parents' very limited AT&T gateway. Edge routers are unavailable, and many alternatives don't get the job done or are too expensive. I've confirmed this is working exactly as I want it to: it provides different DHCP to the Docker, Guest, IOT, Main and Parent networks without inter-network connectivity except where I want it. This works via hardwire, via direct Docker/VM assignment, and from the wireless broadcast on my Unifi APs (make sure to configure VLANs within the controller if you use these).

     General purpose: Your server only has two NICs, and you'd like a virtualized router to handle VLANs, firewall rules and routes. It provides internet from within another network by masquerade, and the PfSense router can receive a DHCP address on the outside.

     Conventions and terms:
     Parent Network: the immediate upstream network. In this configuration it is a semi-trusted residential network.
     VLAN_MAIN: ID 11
     VLAN_DOCKERSANDVMS: ID 12
     VLAN_GUEST: ID 76
     VLAN_IOT: ID 107
     Address space for my VLANs: several subnets in the 172.17.0.0/16 range. Please note that Unraid by default uses some 172.16.0.0/12 addresses, and there could be potential for a subnet conflict if you aren't careful.
     All_VLANs: an interface group containing each VLAN.
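     For reference while reading the firewall rules further down, here is the same plan as data. The IDs are the ones above; the per-VLAN /24s are made-up placeholders inside 172.17.0.0/16, since I'm only committing to "several subnets in that range":

     # VLAN IDs from the conventions above; the /24s are hypothetical placeholders,
     # not prescribed assignments.
     vlan_plan = {
         "VLAN_MAIN":          {"id": 11,  "subnet": "172.17.11.0/24"},
         "VLAN_DOCKERSANDVMS": {"id": 12,  "subnet": "172.17.12.0/24"},
         "VLAN_GUEST":         {"id": 76,  "subnet": "172.17.76.0/24"},
         "VLAN_IOT":           {"id": 107, "subnet": "172.17.107.0/24"},
     }

     for name, v in vlan_plan.items():
         print(f"{name:<20} id={v['id']:<4} subnet={v['subnet']}")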
     VLANs are configured on the Unraid host, not on PfSense.

     Firewall rules documentation syntax:
     Description: "A Description": Int "interfaceOrGroupName" source network "network name" dest address "address name" -port portNumber -allowOrDeny

     Guide: Setting up an Unraid device with only 2 NICs to be PfSense capable

     Configure Unraid, configuring the NIC:
     1) Disable VMs and Dockers temporarily via settings within Unraid.
     2) Remove bonds and bridges from the NIC you would like to pass through and "Bind it to VFIO at boot" via the Tools -> device settings menu (see pic).
     3) Reboot, then configure your remaining NIC from "Settings -> Network Settings". Name it and enable bridging.
     4) Set a static IP or DHCP based on your preference. This is your Unraid server's address on the parent network.
     5) Create your preferred VLANs. Only assign an IP if you want your host accessible on that network.

     Configure Unraid, configuring the PfSense VM (abbreviated, focusing on making your NICs and HDD available through the VM config):
     6) Create a PfSense VM with your preferred resources.
        a. Your vDisk bus needs to be set to SATA.
        b. Your LAN NIC should be br0 and model e1000.
        c. Add a MAC address, source bridge, and model of e1000 for each VLAN and the LAN (the LAN will be for managing PfSense).
        d. Add your WAN NIC, the one you enabled passthrough on, by clicking it under "Other PCI Devices", then create the VM.
     7) Install PfSense and configure it to your preferences.

     Configure PfSense to make the GUI available (abbreviated, focusing only on the steps necessary to reach the GUI):
     8) After the install completes, open the VM's console in Unraid and run through the wizard. If you messed up, or were unsure, follow these steps when you get back to the menu.
     9) Press 1 to "Assign interfaces". You can verify the MAC address in the VM config within Unraid (you will be able to name them later) (see pic).
        a. WAN is the passthrough interface. It should not be virtualized.
        b. LAN will be one of the em interfaces.
        c. You can assign the remaining em interfaces to OPTs (these will be your VLANs), or wait until you are in the GUI.
     10) Press 2 to set the IP addresses. You only need to set the WAN and LAN networks for now.
        a. Assign the WAN (passthrough) interface to receive DHCP.
        b. Assign the LAN interface a static private IP, with no DHCP server, on its own subnet.
        c. The LAN interface is where the management GUI for PfSense will be available, and you should receive a confirmation from the console that it is now available.
     11) Give your computer or management device a second IP on the LAN's subnet and navigate to the IP you assigned within the browser.
     12) Go through the basic config in the web GUI; you can skip most steps, we'll do them manually.

     Completing the PfSense configuration in the GUI:
     13) Go to "Interfaces -> Assignments" and verify all interfaces (WAN, LAN, VLANs) are created and assigned correctly. Do not create VLANs on PfSense.
     14) You can rename them by clicking on them. This is highly recommended for your sanity and should be consistent with your descriptions in Unraid.
     15) Under "Interface Groups", add a group for all of your VLAN interfaces to simplify firewall rules later.
     16) Go to "Services -> DHCP Server" and add your range, DNS server, gateway IP, and lease time. Add a domain if needed.
     17) Go to "System -> Routing" and verify the WAN gateway is configured correctly (see pic; mine is receiving DHCP).
     18) The default gateway should be the IP of the gateway that gets your Unraid host (the parent network) to the internet.
     19) Verify your PfSense VM has internet by pinging 8.8.8.8 from the PfSense console within the Unraid VM's console.

     Configure the firewall: After you've confirmed access to the internet, only the LAN interface (used for management) will have access to the internet until you create additional firewall rules.
     20) Plan your firewall rules so you don't take down your network. All rules can be created on the All_VLANs group for a base setup. As your rules will differ from mine, here are some important notes (a toy sketch of how the first-match ordering plays out follows at the end of this post):
        - All networks other than the one declared LAN do not have an inside-to-outside rule.
        - You'll need rules to restrict communication between each VLAN at layer 3.
        - You'll need rules to prevent VLANs from accessing the parent network while still allowing them to get out.
        With my four VLANs (Main, DockersAndVMs, Guest, IOT) it took me 11 firewall rules. Depending on your use case, this may vary.

     My firewall rules:
     1) Description: "Default allow LANs to internet": Int "All_VLANs" source network "172.17.0.0/16" dest address "WAN Address" -allow
        * Note: since you can't target interface groups in source, I specified all of the 172.17 space, where my VLANs exist. You could also specify the entire 172 range, 172.16.0.0/12.
     2.1) Optional: Description: "Allow access PfSense management from main": Int "All_VLANs" source network "VLAN_MAIN" dest address "VLAN_MAIN address" -port 22,80,443 -allow
     2.2) Optional: Description: "Block access PfSense management from VLAN": Int "All_VLANs" source network "VLAN_DOCKERSANDVMS" dest address "VLAN_DOCKERSANDVMS address" -port 22,80,443 -block
     2.3) Optional: Description: "Block access PfSense management from VLAN": Int "All_VLANs" source network "VLAN_GUEST" dest address "VLAN_GUEST address" -port 22,80,443 -block
     2.4) Optional: Description: "Block access PfSense management from VLAN": Int "All_VLANs" source network "VLAN_IOT" dest address "VLAN_IOT address" -port 22,80,443 -block
     3.1) Description: "Allow VLAN 11 to talk to itself": Int "All_VLANs" source network "VLAN_MAIN" dest network "VLAN_MAIN" -allow
     3.2) Description: "Allow VLAN 12 to talk to itself": Int "All_VLANs" source network "VLAN_DOCKERSANDVMS" dest network "VLAN_DOCKERSANDVMS" -allow
     3.3) Description: "Allow VLAN 76 to talk to itself": Int "All_VLANs" source network "VLAN_GUEST" dest network "VLAN_GUEST" -allow
     3.4) Description: "Allow VLAN 107 to talk to its gateway": Int "All_VLANs" source network "VLAN_IOT" dest network "VLAN_IOT" -allow
     4.1) Optional: Description: "Block access to printers or network appliances from a VLAN": Int "All_VLANs" source network "VLAN_IOT" dest address "NetworkApplianceIPsGroup" -block
     4.2) Optional: Description: "Make printers or network appliances available": Int "All_VLANs" source network "172.17.0.0/16" dest address "NetworkApplianceIPsGroup" -allow
     5.1) Description: "Block VLANs to VLAN 11": Int "All_VLANs" source network "172.17.0.0/16" dest network "VLAN_MAIN" -block
     5.2) Description: "Block VLANs to VLAN 12": Int "All_VLANs" source network "172.17.0.0/16" dest network "VLAN_DOCKERSANDVMS" -block
     5.3) Description: "Block VLANs to VLAN 76": Int "All_VLANs" source network "172.17.0.0/16" dest network "VLAN_GUEST" -block
     5.4) Description: "Block VLANs to VLAN 107": Int "All_VLANs" source network "172.17.0.0/16" dest network "VLAN_IOT" -block
Description: "Default allow LAN to any other rule": Int "All_VLANs" sounce network "172.17.0.0/16" dest address "any" -allow Finally, change your PfSense password. You should develop a habit of doing this and documenting it in a password manager when you initially set it up, but do it now if you forgot. *crosses fingers* Here's hoping that formatting isn't a disaster
  4. While tee-tee jorge answered my questions (thanks again), I've come across an option that would have helped in reducing parity rebuild times. This is for anyone viewing this later looking for a way to reduce that time.

     I have had reason to rebuild my parity several more times since the drives came in, primarily due to testing performance and isolation of different methods of sharing data (direct disk access, disk shares, user shares, raw disk passthrough using vbox commands, etc.). Using the feature below has reduced the parity build to around 4-5 hours. Edit: around 160 MB/s, which is about as close to the theoretical max as I could hope.

     In Disk Settings, switch Tunable (md_write_method) to reconstruct write. As my configuration only has a few disks, this is a complete win.

     Link: https://wiki.unraid.net/Tips_and_Tweaks#:~:text=Turn on Reconstruct Write,-(Highly recommended!&text=A new mode has been,the read then the write). (That page was last modified on 4 September 2016.)

     Turn on Reconstruct Write (Highly recommended! Often called 'Turbo Write')
     Problem: Writing to parity-protected data drives has always been slow, because it requires reading the old data and parity blocks, calculating the new parity, waiting for platter rotation to bring the block location back around, then finally writing the data and parity blocks. This is known as 'read-modify-write' (RMW for short). A new mode has been added to unRAID called 'reconstruct write', where the data is immediately written, all of the other data disks are read, and parity is calculated and then written. There is no wait for platter rotation, and no double I/O to each drive (first the read, then the write). Rather than modifying parity as before, it's building it fresh.
     Discussion: There's an upside and a downside to this change. The upside is it provides a huge performance boost to writes directly to array drives; users often see speeds between 2 and 3 times as fast (that's why it's sometimes referred to as 'Turbo Write'). The downside is that ALL of the array drives must be spun up for EVERY write, so it's a trade-off between write speed and drives staying spun down.
     Suggested fix: go to Settings, then Disk Settings, and change Tunable (md_write_method) to reconstruct write.
     Note: the tunable option Auto currently just sets reconstruct write to off.
     Tip status: highly recommended! It's a fairly new feature, but so far, no reports of problems.
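     For a rough sense of where that 4-5 hour figure comes from (assuming the 3 TB parity drive from elsewhere in this thread and a steady ~160 MB/s, which is an approximation rather than a measured constant):

     # Back-of-the-envelope rebuild time: 3 TB of parity at ~160 MB/s sustained.
     # Real drives slow down on inner tracks, so treat this as a rough average.
     parity_bytes = 3e12           # 3 TB drive
     rate_bytes_per_s = 160e6      # ~160 MB/s

     print(f"{parity_bytes / rate_bytes_per_s / 3600:.1f} hours")   # ~5.2 hours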
  5. Appreciate the clarification, that makes sense. A warning or tooltip suggesting that would have been nice. It's good to know that's the only problem. I am slightly concerned that after the second copy I got a green icon on the [new] 3 TB parity drive, with the [old] 1 TB parity drive in disk 2 saying its contents were emulated, which is what gave me the confidence to switch assignments without booting up. But that's probably a symptom of having done the procedure twice without starting the array, and I doubt many people would experience the same set of conditions that led to it.
  6. Hey, I really appreciate your time and looking into this. I also appreciate the tips. I rethought the problem and realized I could drop disk 2 from the array, let it rebuild, and then drop the parity disk and let parity rebuild on the 3 TB. This worked and only took about 6 hours. My other drives should be here soon to replace the again-failing disk 2.

     To answer the other questions and satisfy curiosity: both disks have had the array rebuilt onto them multiple times. There have been several distinct SMART errors and notifications for disks 1 and 2, but not for the parity drive. I had a partner remoting into one of the VMs confirming the poor performance while I was seeing the notifications in the Unraid UI (90-100% CPU utilization within a CPU-isolated VM, which seems almost certainly related to a bad sector saturating the processor cache for a couple of hours; Unraid reported that core at 12% while the VM reported 90-100%). They're hot because they've been writing for days, but more so because I live in Texas and my AC is out. The house gets to around 85+ degrees, and even constantly moving air over them with ceiling, tower and filter fans doesn't do a lot (warranties take forever to fulfill).
  7. Edit: thank you for your time.

     Initial configuration here: [quote]
     This would be nice: [quote]
     But: [quote]
     So this is necessary: [quote]

     However, after it's complete and everything says it's valid, if I attempt to change the disk configuration in any way, it goes back to the original configuration. Rebooting the system also reverts to the original configuration. I can send you screenshots 10 hours from now showing this, but the goal was to avoid another 10-hour copy for 80 GB of data.

     To answer the other questions: yes, there are VMs I would like to keep. They are on disk 1. Disk 1 has had several SMART errors as well.

     unraidmc-diagnostics-20200601-1906.zip
  8. Edit: Thanks to everyone.
     Edit 2: tl;dr: disks are failing, and the parity swap procedure is losing configuration data before the array is started. A parity rebuild takes 9 or more hours. My last post in this thread explains how I reduced parity rebuild times.

     I've been having drives intermittently fail/recover (SMART errors) in my Unraid trial for over a week now. I ordered some 3 TB drives to replace them, but they are set to come in at different times: the first arrived Saturday, but the others won't arrive until Tuesday. No big deal.

     I went to preclear the first drive and received errors that it is missing an Unraid MBR. I'm not sure what that's about or why it would be a requirement to test and wipe a drive, but I moved on. After setting the drive to replace the currently emulated drive, I received an error that it is too large and I need to make it my parity drive. I looked up the parity swap, and it's a bit annoying because I'll have to be offline for a while, but it's only 80 GB, so I didn't think it would be too bad.

     It took 10 hours to copy the parity data. Okay, so clearly it's doing some sort of imaging. At this point I quickly switched my drives back to the original configuration to make sure the old one still showed as failing before attempting to salvage it with a preclear. It shows as disabled and emulated, but not failing. When I switch back to the new configuration with my 3 TB drive set as parity, it now displays that it is a new disk. I decided to take advantage of the fact that I could preclear my new drive and call the 10 hours lost time.

     The preclear finished Monday morning. I began the parity copy again; 9 hours later, it completed. The parity drive I copied from says disabled/emulated, and my 3 TB says it's a valid parity disk now. I switch the old, now emulated parity disk in the UI so I can try to preclear it, and immediately I get an error that the array is not valid. I put it back where it was, and both parity disks say new and it wants me to copy again.

     Does anyone have any ideas why this process has been so painful or what could be going on? I've had my servers and VMs down for going on 3 days now, just waiting on copies and disk maintenance. Is parity swap on Unraid usually this unreliable? I've got 10 days before purchasing, and after this experience I'm really leaning towards no.
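     For context on those copy times, a rough implied-throughput number (this assumes the drive being copied from is the old 1 TB parity disk mentioned in another of my posts in this thread, and that the ~10 hours is wall-clock copy time):

     # Implied average throughput of the parity copy, assuming ~1 TB copied in ~10 hours.
     copied_bytes = 1e12
     copy_seconds = 10 * 3600

     print(f"{copied_bytes / copy_seconds / 1e6:.0f} MB/s")   # roughly 28 MB/s average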