

About 905jay

  • Rank
    Advanced Member


  1. yes sir it is working as expected. Thanks for helping me fix this!
  2. You're a beautiful man, Charlie Brown! I will look into doing this after "production hours" lol Thanks for your help on clarifying this for me @ken-ji
  3. Would assigning the info based on the interface MAC in pfsense make more sense and just set it all to automatic? That way any gateway issues should be eliminated, right? I don't have a firm understanding of the networking, it's not one of my strengths.
  4. thanks for all the great info guys, I really appreciate it. I'm hesitant to make further changes to the configuration, as I host a Bitwarden as well as a Confluence instance that I rely on for frequent daily use. I also don't want to ignore you folks who know what you're talking about. Is it just a matter of me simply leaving the gateway blank for the interfaces eth1 & eth2? Should I leave it blank for all three interfaces (eth0, eth1 & eth2)? I'm not using VLANs or any tagging, just 3 distinct subnets, each for an intended purpose.
  5. @kana thanks very much for your help. I spent the entire day spinning my wheels and couldn't understand where things went wrong. You've saved my sanity. Should I run anything else to further clean it up? I have a 4-port NIC: 10.15.81.xxx is my Main LAN, 10.15.82.xxx is for IOT devices, and 10.15.83.xxx is for the Guest & Family LAN. I am running multiple piholes on unRAID, one for each of those interfaces (eth0 is Main, eth1 is IOT, eth2 is Guest), so I would like to keep all 3 interfaces connected as I presently have them. I run HomeAssistant wh
  6. The routes highlighted in yellow can't be deleted for some reason. I have Docker and the VM manager off, and went back into network settings, but I can't get rid of them.
  7. unraid-syslog-20200902-1913.zip I've got a similar issue: unRAID and its containers cannot access the internet. I can't ping / nslookup / traceroute out. I was playing with some pfsense firewall settings last night and thought perhaps I messed something up somewhere. So I reverted to the known good working backup, but still no dice. The unRAID server has internal connectivity; I can access everything on local IP:PORT. Piholes have been removed from the network to rule them out. pfsense is giving / as the DNS servers I have
  8. Left the parity to run overnight, and it is at the exact same place it was at yesterday late afternoon: 98.7%, 3 hours 50 minutes remaining, for at least the last 12 hours. I've given up on the process now; I've aborted and will install the LSI card and see if that helps any. unraid-diagnostics-20190829-1236.zip
  9. @Frank1940 thanks for the follow up on this. Yes, initially the server was rebooting every night for an unknown reason. That seems to have been fixed by not having the Docker and VM services running. The parity is taking forever to run; that is the current concern. Due to the random freezing from before (unresolved), the parity was totally messed up. I see what you mean in terms of the reads and writes to the disk in the screenshot, but I'm unable to isolate what is causing that, to those disks only. I thought the point of unRAID was to spread it out across all disks (
  10. I have no explanation for that other than it is how it was shown to me, and recommended that I set it up this way. But it seems like there are 10 people, who all know what they are talking about, giving me 10 variations on doing everything under the sun. Typical internet stuff... everyone is an expert on everything behind the cloak of an avatar on a forum. So far a 6TB parity drive sync has been running for approx. 2 days and still isn't close to being finished. At 11:30pm last night it showed 93%, 1 hour to complete (approx.). This morning it shows 93% and a day left
  11. Mover runs hourly; however, there is nothing running at this time in terms of services that I am aware of. The Docker service is disabled, as is the VM service.
  12. Hey @trurl and @johnnie.black, the parity was running for about 24 hours. I started it Monday morning and it ran into Tuesday morning. I decided to stop it yesterday morning and restart it, because it was stuck at about 90% and showing 900+ days to complete. I figured something was wrong and restarted it yesterday morning. At 11:30pm last night it showed 93%, 1 hour to complete (approx.). This morning it shows 93% and a day left to complete (the transfer rate went down from 100MB/s to about 5MB/s). Can anyone help me isolate what the issue is here? It's
  13. @johnnie.black @trurl thanks very much for your input. I have the Docker and VM services disabled again until the rebuild is complete, and I implemented the change that @johnnie.black recommended. I will report back here once that is complete.
  14. @trurl would you be able to help me figure out if it is a container (which one, or which combination of containers) or a VM that is causing all these issues I'm seeing? I've replaced the memory on the server (2x8GB + 2x4GB), and formatted the parity disk and am rebuilding the parity now as we speak. I feel that perhaps I've misconfigured something, but I'm unsure of what it may be, and don't want to constantly have to go through this parity rebuild due to sudden freezing and power-off situations. Is it possible that somewhere I have over-provi
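The gateway questions in posts 3–7 mostly come down to one check: a box with several interfaces should normally have exactly one default route, so setting a gateway on eth1/eth2 as well as eth0 leaves extra `default` entries in the routing table. A minimal sketch of that check, using a hypothetical routing table (on a live unRAID box you would capture the real output of `ip route` instead of this sample text):

```python
# Hypothetical `ip route` output for illustration only; the interface
# names and subnets mirror the ones described in the posts above.
SAMPLE_ROUTES = """\
default via 10.15.81.1 dev eth0
default via 10.15.82.1 dev eth1
10.15.81.0/24 dev eth0 proto kernel scope link src 10.15.81.2
10.15.82.0/24 dev eth1 proto kernel scope link src 10.15.82.2
"""

def default_routes(route_table: str) -> list[str]:
    """Return the lines of a routing table that define a default gateway."""
    return [line for line in route_table.splitlines()
            if line.startswith("default ")]

extras = default_routes(SAMPLE_ROUTES)
print(f"{len(extras)} default route(s) found")
for line in extras:
    print("  " + line)
# More than one default route usually means a gateway was entered on
# an interface that should have had its gateway field left blank.
```

If the count is greater than one, removing the gateway from the secondary interfaces (and letting only the main LAN interface carry a default route) is the usual fix being suggested in those replies.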
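The parity figures in posts 8–12 are actually internally consistent once the rate drop is factored in. A quick back-of-envelope check (the 6 TB size, the 93% figure, and the two rates come from the posts; the helper function itself is just illustrative arithmetic):

```python
def remaining_hours(disk_bytes: float, pct_done: float, rate_mb_s: float) -> float:
    """Hours left in a parity sync, given disk size, percent complete,
    and the current sync rate in MB/s (1 MB = 1e6 bytes)."""
    remaining_bytes = disk_bytes * (1 - pct_done / 100)
    return remaining_bytes / (rate_mb_s * 1e6) / 3600

SIX_TB = 6e12  # 6 TB parity disk, as described in the posts

# At the original ~100 MB/s, the last 7% would take about an hour,
# matching the "1 hour to complete" estimate shown at 11:30pm.
print(f"at 100 MB/s: {remaining_hours(SIX_TB, 93, 100):.1f} h left")

# At the degraded ~5 MB/s, the same 7% takes roughly a day, matching
# the "a day left" estimate the next morning.
print(f"at   5 MB/s: {remaining_hours(SIX_TB, 93, 5):.1f} h left")
```

So the estimate jumping from "1 hour" to "a day" at a constant 93% is what you would expect from the throughput collapsing, which points at a disk or controller problem (hence the LSI card suggestion) rather than a stuck progress counter.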