Amigaz

Members
  • Content Count

    12
  • Joined

  • Last visited

Community Reputation

1 Neutral

About Amigaz

  • Rank
    Member

  1. Happy Pro license user here after switching from Hyper-V Server 2016 about a year ago. Keep up the good work, guys!
  2. Nope, I didn't roll back to 6.6.7. Forgot to mention that I'm not running on a Ryzen platform; I'm using an Intel platform with a Xeon E5-2697 v3.
  3. Same issues here with 6.7.0 and with 6.6.7. For me, the VNC option to connect to the VM offers the most lag-free experience.
  4. eth0 is already a port of the bond, along with eth1, eth2, and eth3.
  5. After adding a 4-port NIC to my system, Unraid seems to have decided to create a default route so that all traffic goes through the built-in NIC on the motherboard (eth4). I'd like to have the default route go through my bond instead (a command sketch follows after this list). Is this possible? I can't seem to be able to delete or change anything in the settings (please see the image below).
  6. I have been using Unraid for about a month now; I started with version 6.6.7 and I'm now on version 6.7.0. I have a 4-port network card in my server along with two built-in network interfaces on the motherboard, so a total of six network interfaces ranging from eth0 to eth5. When I started running Unraid, I teamed the four interfaces on my NIC into a static LAG (balance-xor) with a fixed IP address; the two remaining network interfaces are not in a bond and were given fixed IP addresses. Yesterday I looked in my router's traffic management interface and saw that a lot (maybe all?) of the traffic from my Unraid server seems to go through one of the NICs on the motherboard and not through my NIC with the bond. I was under the impression that Unraid uses eth0 for all traffic? I've looked at the Unraid network settings and there are some odd-looking route tables there that were created by Unraid; it almost looks like all traffic is routed through eth5? Docker config. I'm quite a novice when it comes to Linux networking. I'd like all traffic (Docker etc.) to go through the bond, with eth4 and eth5 as management interfaces (see the command sketch after this list). Hope someone can point me in the right direction, thanks.
  7. Interesting, I have exactly the same issue here with my 1 TB MX500 cache drive.
  8. Thanks, I'll give it a go. Sent from my iPhone using Tapatalk.
  9. OK, sounds easy and simple. Do I need to redo the Docker paths and port mappings?
  10. I need to move my docker.img to the cache drive for better Docker performance. Is there a quick and easy way to do this (a rough sketch follows after this list)? Thanks.
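
For the default-route question in items 5 and 6 above, here is a minimal command sketch; the bond name bond0, the gateway 192.168.1.1, and the interface eth4 are assumptions, so adjust them to your own network:

    # Show the current routing table; the interface after "dev" on the
    # "default" line is the one carrying outbound traffic.
    ip route show

    # Drop the unwanted default route and add one through the bond.
    ip route del default dev eth4
    ip route add default via 192.168.1.1 dev bond0

These commands only change the live routing table. On Unraid the lasting fix is usually made in Settings > Network Settings, by leaving the gateway field empty on the interfaces that should not carry the default route, so the change survives a reboot.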
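
For the docker.img question in items 9 and 10, here is a rough sketch, assuming the image currently lives on the array at /mnt/user/system/docker/docker.img and the cache is mounted at /mnt/cache; both paths are assumptions, so check Settings > Docker for the real location first:

    # Stop the Docker service (Settings > Docker in the web UI, or from
    # the console on a Slackware-based Unraid install).
    /etc/rc.d/rc.docker stop

    # Move the image to the cache drive.
    mkdir -p /mnt/cache/system/docker
    mv /mnt/user/system/docker/docker.img /mnt/cache/system/docker/docker.img

    # Point the Docker image location at the new path in Settings > Docker
    # and start the service again.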