monden2

Members
  • Posts: 6


  1. Hi all, I have installed a new pfSense router, which leads to a switch, and set up the router to connect via AirVPN. The system works great: all my devices get a secured internet connection, except for SABnzbd. I can download dockers, use indexers, Deluge works fine, I can download plugins, and even my VMs have internet access, but as soon as SABnzbd tries to make an internet connection, it fails. My 'Fix Common Problems' plugin also says that it cannot connect to GitHub and that I should change my DNS settings to 8.8.8.8 and 8.8.4.4. I gave that a try in the Network Settings tab, but no luck. I also found that I cannot ping 8.8.8.8 from behind my router (I can, however, nslookup google.com from behind the router, and ping 8.8.8.8 from the pfSense web GUI), so I am not sure whether my pfSense router is blocking something or whether Unraid needs to be reconfigured somewhere (it was working before on the cheapo router from my ISP).

     Unraid network settings:
     MAC address: D0:50:99:88:37:4F
     Enable bonding: Yes
     Bonding mode: Active-backup (1)
     Bonding members of bond0: eth0
     Enable bridging: Yes
     Interface description:
     Network protocol: IPv4 only
     IPv4 address assignment: Automatic
     IPv4 address: 192.168.1.103/24
     IPv4 default gateway: 192.168.1.1 / 210 optional metric (lowest is preferred) (the pfSense router login is on 192.168.1.1)
     DNSv4 server assignment: Automatic
     IPv4 DNS server: 192.168.1.1
     Desired MTU:
     Enable VLANs: No

     Help is very much appreciated! :-)

     --EDIT-- Small update: after some more diagnosing, I found that all dockers are actually getting a connection except SABnzbd. So I think it might be a SABnzbd-specific issue with pfSense. Anyone any ideas?
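Since nslookup works but ping fails, one way to narrow the SABnzbd problem down is to test the exact thing SABnzbd needs: a TCP connection to the news server. A minimal Python sketch of such a probe (the news-server hostname and ports below are placeholders, not taken from the poster's setup):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Try a TCP connection to host:port; True on success, False on any failure.

    DNS resolution happens inside create_connection, so a broken DNS setup
    and a blocked port both show up here; distinguish them by also probing
    a raw IP (which skips DNS entirely).
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example probes (hostname is a placeholder, not a real address):
# can_reach("news.yourprovider.example", 563)  # NNTP over SSL
# can_reach("8.8.8.8", 53)                     # raw IP, no DNS involved
```

Running this once from the Unraid console and once inside the SABnzbd container (e.g. via `docker exec`) would show whether the container's DNS or routing behaves differently from the host's, which is a common cause of one-container-only failures behind a VPN gateway.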
  2. Hi PWM, The idea is not to fully saturate both 10GbE NICs at the same time; that would quickly be exhausting on the system, yes. It's more from a pricing perspective. The price of the additional 10GbE NIC (at 50-100 USD) to make a direct link to my dedicated rig is usually lower than the price difference between a switch with only a 10GbE uplink and a switch with both a 10GbE uplink and a 10GbE output port.
  3. Hi Wayner, The idea is not to get a full 10G network. I just want my server to have a 10G uplink to a switch so that multiple gigabit devices can connect to it at the same time (e.g. for media consumption) without any bandwidth issues. I was also thinking about running my future 10G main rig over the switch for my photo and video work. However, given some of the feedback, it might be a better idea to have 2x 10G NICs in my server: one for the 10G uplink to the switch which connects all my gigabit devices, and the second for a direct 10G connection to my main rig. I'll also connect a UniFi AP Pro to my switch to ensure my WiFi performance is better than the crappy 10/100 Mbit wireless router I got from my ISP.
  4. Hi Hoopster, Thank you for the input. As much as I would like to get a full UniFi system, since I want a 10GbE uplink from my server to the rest of the network, as well as a 10GbE connection to my main rig, I think I would only be able to get the US-48 500W switch, which would be roughly 800 USD. And then I would still have to buy the router. That's a bit above my budget at the moment. Hence I think a pfSense router with dual 10GbE Ethernet or SFP+ ports would be much cheaper, which would allow me to get a suitable PoE network switch.
  5. Hi Jonathan, Thank you for the reply. The UniFi APs look good; I'll have a look at them. As for the 'additional' router, I meant to replace the crappy ISP 'wireless' modem/router with a pfSense one and use the ISP's just for WAN.
  6. Hi all! I have seen many good things happening on this forum, so I am hoping to get some feedback.

     Firstly, the setup I am currently running. I have a dedicated system running Unraid with a Windows 10 VM. The main reason for this setup, compared to a dedicated PC and a separate NAS, was experimentation and practice with combining a dedicated storage server and a media server in one. Firstly, the server is meant to be used as a storage location for all my media. Secondly, the server is meant as a media server using dockers like Plex (with SABnzbd, Radarr and Sonarr powering it). The Windows 10 VM on the server is used as my 'daily driver' for photo/video editing, with GPU passthrough for rendering and gaming. Though the system works fine, certain issues, specifically with USB (many gaming peripherals connected to the motherboard's USB controllers), made me decide to ditch the VM as a daily driver and go back to a dedicated PC, recycling some of the RAM and the GPU I currently have. This will free up the server to work fully for storage and media.

     My current system looks like this:
     Motherboard: ASRock X99 Extreme4
     CPU: Intel 5820K
     RAM: 32GB G.Skill DDR4-2400
     GPU: ASUS Strix GTX 1080 8GB
     SSD: Crucial M500 240GB
     HDD: 2x WD Red 4TB, 1x WD Blue 2TB, 1x Seagate 'something' 4TB (parity)
     PSU: Corsair RM850

     For the future, I am planning the following for my server:
     1. Reduce RAM to 16GB and move the rest to my dedicated rig
     2. Move the GPU to my dedicated rig
     3. Move the PSU to my dedicated rig and replace the server's with a 450-watt PSU
     4. Upgrade with a 10GbE PCI-E card (not sure yet whether to go with RJ45 or SFP+)
     5. Expand the SSD pool with another 240GB drive and run the setup in RAID0 (current transfer speeds via my VM and WiFi max out at 2-3 MB/s)
     6. Find a decent WiFi router to give me better speeds than the crappy one I currently have from my ISP

     My future plans for the server are to store all my media and run it as a media server (hence I want to keep the 5820K in there). I also want to use the server on the side to run game servers (e.g. Minecraft). In addition, I want to install a 1000/10000 Mbps switch so that I can wire up all my regular gigabit devices (laptops, TVs, printers, etc.), while my dedicated rig gets a 10Gb connection to my server whenever I need to do my photo/video uploads/downloads. Also, I don't really want to spend thousands of euros doing so.

     So my questions would be the following:
     1. Would you go with RJ45 or SFP+ for the 10GbE connection between my server/rig and the switch, and why? (I see X540-T2 cards for about EUR 100, and Mellanox SFP+ cards for about EUR 50-75)
     2. Which switch would be recommended for the proposed solution? (e.g. I am highly interested in the ASUS XG-U2008 due to its relatively low cost of EUR 250 for 2x 10GbE and 8x 1Gb RJ45 ports)
     3. Which WiFi solution would you go with to get the best WiFi network and internet speeds? (e.g. hard-wire multiple APs to a router, or wire one high-performance router to the router, with potential range extenders)
     4. Would it make sense to also build an additional router using pfSense to optimise network throughput?

     Please feel free to let me know if you need any further information.