905jay

Members
  • Content Count: 34
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About 905jay
  • Rank: Advanced Member


  1. Yes sir, it is working as expected. Thanks for helping me fix this!
  2. @ken-ji does this look better to you?
  3. You're a beautiful man, Charlie Brown! I will look into doing this after "production hours", lol. Thanks for your help clarifying this for me, @ken-ji.
  4. Would it make more sense to assign the addressing in pfSense via static DHCP mappings based on each interface's MAC, and just set everything on the unRAID side to automatic? That way any gateway issues should be eliminated, right? I don't have a firm understanding of networking; it's not one of my strengths.
  5. Thanks for all the great info, guys; I really appreciate it. I'm hesitant to make further changes to the configuration, as I host a Bitwarden as well as a Confluence instance that I rely on for frequent daily use. I also don't want to ignore you folks who know what you're talking about. Is it just a matter of me simply leaving the gateway blank for interfaces eth1 & eth2? Or should I leave it blank for all three interfaces (eth0, eth1 & eth2)? I'm not using VLANs or any tagging, just 3 distinct subnets, each for an intended purpose. (See the route-table sketch after this list.)
  6. @kana thanks very much for your help. I spent the entire day spinning my wheels and couldn't understand where things went wrong; you've saved my sanity. Should I run anything else to further clean it up? I have a 4-port NIC:
     • 10.15.81.xxx is my Main LAN (eth0)
     • 10.15.82.xxx is for IoT devices (eth1)
     • 10.15.83.xxx is for the Guest & Family LAN (eth2)
     I am running a Pi-hole on unRAID for each of those interfaces, so I would like to keep all 3 interfaces connected as I presently have them. I run Home Assistant, which uses br1 (the eth1 interface), and I also run WireGuard on my unRAID server and use it frequently. Thanks again for saving my ass, and my sanity. Just seeking your advice so I don't screw anything up.
  7. The routes highlighted in yellow can't be deleted for some reason. I have Docker and the VM manager off, and I went back into Network Settings, but I can't get rid of them. (See the route-deletion sketch after this list.)
  8. I've got a similar issue: unRAID and its containers cannot access the internet. I can't ping out, nslookup, or traceroute out. I was playing with some pfSense firewall settings last night and thought perhaps I'd messed something up somewhere, so I reverted to the known-good working backup, but still no dice.
     The unRAID server has internal connectivity; I can access everything on local IP:PORT. The Pi-holes have been removed from the network to rule them out, and pfSense is handing out 8.8.8.8 / 1.1.1.1 as the DNS servers. I have VMs hosted on unRAID that have internet access and are assigned an IP from pfSense based on MAC address. I also have some containers hosted on unRAID that are given external SSL access (Bitwarden / Nextcloud / Confluence / Home Assistant); they can all be accessed internally via IP:PORT as well as https://service_name.com, but I cannot access them from outside the network.
     I haven't changed anything on unRAID that I can think of besides enabling an interface on my 4-port NIC to add another AP for guest access, but I've reverted all those changes. MTU is set to 1500 per the recommendation above. I have restarted pfSense, unRAID, and my APs just to rule things out, and, as mentioned, removed the Pi-holes and am using 8.8.8.8 / 1.1.1.1 as DNS for testing to see what the hell is wrong with my setup. (The tests I ran are sketched after this list.) Any help would be greatly appreciated. unraid-syslog-20200902-1913.zip
  9. Left the parity to run overnight, and it is at the exact same place it was yesterday late afternoon: 98.7%, with 3 hours 50 minutes remaining, for at least the last 12 hours. I've given up on the process now; I've aborted it and will install the LSI card and see if that helps at all. i5unraid-diagnostics-20190829-1236.zip
  10. @Frank1940 thanks for the follow-up on this. Yes, initially the server was rebooting every night for an unknown reason; that seems to have been fixed by not having the Docker and VM services running. The parity check taking forever to run is the current concern. Due to the random freezing from before (still unresolved), the parity was totally messed up. I see what you mean in terms of the reads and writes to the disks in the screenshot, but I'm unable to isolate what is causing that load on those disks only (see the per-disk I/O sketch after this list). I thought the point of unRAID was to spread it out across all disks (read/write/data), not necessarily evenly, but better than this. Some people point to the fact that 6 disks are on the Intel controller and 2 are on the Marvell controller on the motherboard. I am holding in my hand an LSI card that arrived today; once the parity rebuild is complete, I intend to install this card and use it exclusively. Do you think this parity will ever rebuild? Or am I better off stopping it now, installing the card, and rebuilding the parity again?
  11. I have no explanation for that other than it is how it was shown to me and how I was recommended to set it up. But it seems like there are 10 people, who all know what they are talking about, giving me 10 variations on doing everything under the sun. Typical internet stuff: everyone is an expert on everything behind the cloak of an avatar on a forum. So far a 6TB parity drive sync has taken approx. 2 days and still isn't close to finished. At 11:30pm last night it showed 93%, approx. 1 hour to complete. This morning it shows 93% and a day left to complete (the transfer rate went down from 100MB/s to about 5MB/s). From 8am this morning to now, it has moved 3% (give or take).
  12. Mover runs hourly; however, there is nothing running at this time in terms of services that I am aware of. The Docker service is disabled, as is the VM service. (See the mover-check sketch after this list.)
  13. Hey @trurl and @johnnie.black, the parity check was running for about 24 hours; I started it Monday morning and it ran into Tuesday morning. I decided to stop it yesterday morning and restart it because it was stuck at about 90% and showing 900+ days to complete, so I figured something was wrong. At 11:30pm last night it showed 93%, approx. 1 hour to complete. This morning it shows 93% and a day left to complete (the transfer rate went down from 100MB/s to about 5MB/s). Can anyone help me isolate what the issue is here? It's been stable otherwise and hasn't crashed on me at all, but that problem has led to this problem. Diagnostics and syslog attached: i5unraid-diagnostics-20190828-1309.zip
  14. @johnnie.black @trurl thanks very much for your input. I have the Docker and VM services disabled again until the rebuild is complete, and I implemented the change that @johnnie.black recommended. I will report back here once that is complete.
  15. @trurl would you be able to help me figure out if it is a container (which one, or which combination of containers) or a VM that is causing all these issues I'm seeing? I've replaced the memory in the server (2x8GB + 2x4GB), formatted the parity disk, and am rebuilding the parity now as we speak. I feel that perhaps I've misconfigured something, but I'm unsure what it may be, and I don't want to constantly go through this parity rebuild due to sudden freezing and power-off situations. Is it possible that somewhere I have over-provisioned memory or CPU resources to a container (or multiple containers)? (See the container resource-usage sketch after this list.) Is there anything else I can provide the community that may point to where the shortcomings are with this system?
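
For posts 5 and 6 above: a minimal sketch of what the routing table should look like once only the Main LAN interface carries a gateway. The gateway address 10.15.81.1 and the br0/br1/br2 bridge names are assumptions inferred from the subnets in post 6, not confirmed details from the thread.

    # Inspect the current routing table on the unRAID console
    ip route show

    # Expected shape with a single default gateway on the Main LAN
    # (10.15.81.1 is an assumed pfSense address on that subnet):
    #   default via 10.15.81.1 dev br0
    #   10.15.81.0/24 dev br0 proto kernel scope link src 10.15.81.x
    #   10.15.82.0/24 dev br1 proto kernel scope link src 10.15.82.x
    #   10.15.83.0/24 dev br2 proto kernel scope link src 10.15.83.x

With only one default route, eth1 and eth2 stay reachable on their own subnets (the scope link routes handle that), while all internet-bound traffic leaves via the Main LAN, which is what leaving the gateway blank on eth1 & eth2 achieves.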
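
For post 7 above: when the web UI refuses to drop a leftover route, it can usually be removed from the console. A hedged sketch; the 172.17.0.0/16 destination is only an example of a typical leftover Docker route, so substitute the actual destination and device shown in your own table.

    # Identify the leftover entry and the device it is bound to
    ip route show

    # Delete a stale route by hand (example values, not from the screenshot)
    ip route del 172.17.0.0/16 dev docker0

Note that routes deleted this way are not persistent; if the entry returns after a reboot, something (Docker, the VM manager, or a static route in Network Settings) is still recreating it.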
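
For post 8 above: a short checklist that separates a routing/NAT failure from a DNS failure, using only addresses already mentioned in the post.

    # 1. Reach the internet by IP (tests routing/NAT, not DNS)
    ping -c 3 8.8.8.8

    # 2. Resolve a name against each upstream resolver directly
    nslookup google.com 8.8.8.8
    nslookup google.com 1.1.1.1

    # 3. See where the path dies; the first hop should be pfSense
    traceroute -n 8.8.8.8

    # 4. Confirm which resolvers and default route unRAID actually has
    cat /etc/resolv.conf
    ip route show default

If step 1 fails while the first hop in step 3 still answers, the break is on the pfSense side (rules or NAT); if step 1 succeeds but step 2 fails, the problem is DNS only.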
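
For post 10 above: a sketch for watching per-disk throughput during the rebuild, to check whether the two Marvell-attached disks are the ones lagging. Assumes the iostat tool (from the sysstat package) is available on the box.

    # Per-device throughput in MB/s, refreshed every 5 seconds;
    # look for disks whose read rate sits far below the others
    iostat -d -m 5

    # Raw kernel counters for the same data, no extra tools needed
    cat /proc/diskstats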
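
For post 12 above: a way to confirm from the console that mover really is idle while the parity check runs. The /mnt/disk1 path is the standard unRAID mount point, but treat it as an assumption for your layout.

    # Is the mover script currently running? (prints PID + command if so)
    pgrep -fa mover

    # Which processes have files open on a given data disk?
    fuser -vm /mnt/disk1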
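
For post 15 above: once the Docker service is re-enabled, per-container CPU and memory figures can be read straight from Docker, which should show whether any container (or combination of containers) is over-provisioned. A sketch, not a diagnosis:

    # One-shot snapshot of every running container's CPU and memory use
    docker stats --no-stream

    # Each container's configured memory limit (0 means unlimited)
    docker ps -q | xargs -r docker inspect --format '{{.Name}} {{.HostConfig.Memory}}'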