JimPhreak Posted January 5, 2016
I want to add a second vNIC to my unRAID VM in ESXi 5.5 and was wondering if there are any "gotchas" before I go ahead and bond the adapters in unRAID. The last thing I want is to lose remote access to my server.
JimPhreak Posted January 5, 2016 (Author)
Hmmm... I figured this would be a more common configuration for those using ESXi, but maybe I'm wrong?
macester Posted January 6, 2016
I don't use ESXi, but why would you want to add a vNIC and then bond them in unRAID? Why not "hard" bond the physical NICs in ESXi for all your VMs? vNICs, regardless of hypervisor (ESXi, KVM, etc.), just use pure CPU to communicate.
JimPhreak Posted January 6, 2016 (Author)
I do have them hard bonded in ESXi. I just figured there might be some way to take advantage of having a second NIC inside my unRAID configuration to get some added bandwidth.
macester Posted January 6, 2016
Communication like that only goes through the CPU, so it doesn't matter. You can install a speed-test program in your VMs and benchmark inter-VM communication to see what I mean; a good benchmarking tool is "iperf". In your case you would install iperf on a networked PC and on unRAID (or on a VM in unRAID), and you should already see the speed of the ESXi bond (I'd guess around 2 Gbit/s).
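As a sketch of the test described above: run iperf in server mode on the unRAID box and point an iperf client at it from a wired PC. The IP address below is a placeholder, and iperf must be installed on both ends (iperf3 follows the same pattern with slightly different flags).

```shell
# On the unRAID server (or a VM running on it), start the listener:
#   iperf -s
#
# On the networked PC, point the client at the server's IP (placeholder here),
# using several parallel streams, since a single TCP flow is typically pinned
# to one physical NIC by the bond's hashing policy:
#   iperf -c 192.168.1.10 -P 4 -t 30
#
# With a healthy 2x1GbE bond, the aggregate of the parallel streams should
# approach ~2 Gbit/s, while any single stream tops out near ~1 Gbit/s.
```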
JimPhreak Posted January 6, 2016 (Author)
Ahhh, so you're saying that even though I only have one "NIC" in unRAID, since it's virtual (and using the vmxnet3 driver, which is capable of 10 GbE) I'd still get max throughput based on the amount of physical bandwidth coming into my ESXi server.