Adding second vNIC



Hmmmmmm... I figured this would be a more common configuration for those using ESXi, but maybe I'm wrong?

 

I don't use ESXi, but why would you want to add a vNIC and then bond them in unRAID? Why not "hard" bond them in ESXi for all your VMs? vNICs, regardless of ESXi, KVM, etc., just use pure CPU to communicate.


I do have them hard bonded in ESXi. I just figured there might be some way to take advantage of having a second NIC inside my unRAID configuration, maybe to get some added bandwidth.


Communication like that will only go through the CPU, so it doesn't matter.

 

You can install a speed test program in your VMs and test inter-VM communication to see what I mean; a good program for benchmarking is "iperf".

 

But in your case you would install iperf on a networked PC that is on the bonded link and on unRAID (or a VM in unRAID), and you would most likely already see the speed of the bond in ESXi (I'd guess 2 Gbit/s).
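A rough sketch of that test, assuming iperf3 is installed on both ends and that 192.168.1.100 stands in for the unRAID box's address (both are placeholders, not from this thread):

```shell
# On unRAID (or a VM running on it): start an iperf3 server
iperf3 -s

# On the bonded PC: run a 30-second test with 4 parallel streams.
# Multiple streams matter here because a single TCP flow typically
# hashes onto only one physical link of an LACP/802.3ad bond, so one
# stream alone may top out at 1 Gbit/s even on a 2 Gbit/s bond.
iperf3 -c 192.168.1.100 -P 4 -t 30
```

Adding `-R` to the client command reverses the direction of the transfer, which is useful for checking that the bond performs the same both ways.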


Ahhh, so you're saying that even though I only have one "NIC" in unRAID, since it's virtual (and using the vmx drivers, which are capable of 10 GbE), I'd still get max throughput based on the amount of physical bandwidth coming into my ESXi server.
