Dual x670V Stacks, MLAG, and VMware ESX

Scott_Benne
New Contributor
Pro/con design considerations for configuring two 2-node x670V stacks, with emphasis on availability and performance.

Before submitting this for budget approval, I was hoping to get feedback from anyone with experience configuring MLAGs between X670V-48t switches and VMware. I'm currently running out of available ports, and instead of adding just one x670V to my existing stack, I'm considering adding a separate stack and configuring MLAGs to our ESX hypervisors with standard vSwitches. Right now I would have to shut down the entire server/storage footprint to update the EXOS software on our single 2-node stack. I would be grateful for comments either in favor of or in opposition to the configuration below.

End result:
Two 2-node x670V stacks
MLAG: Stack A port 1:1 with Stack B port 1:1 for generic server traffic
MLAG: Stack A port 1:17 with Stack B port 1:17 for NFS storage traffic
MLAG: Stack A port 1:33 with Stack B port 1:33 for management traffic
Server environment is all VMware, with 4x 10G and 4x 1G NICs per host
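
For reference, a minimal EXOS sketch of how the MLAG peering between the two stacks might be configured (shown for Stack A; Stack B mirrors it). The ISC port, VLAN name, IP addressing, and MLAG IDs are illustrative assumptions, not part of the actual design:

# Inter-switch connection (ISC) carrying the MLAG control traffic
create vlan isc
configure vlan isc add ports 2:48 untagged
configure vlan isc ipaddress 10.0.0.1/30

# Declare Stack B as the MLAG peer (10.0.0.2 is Stack B's ISC address)
create mlag peer "stack-b"
configure mlag peer "stack-b" ipaddress 10.0.0.2

# Pair the server-facing ports; the id must match on both stacks
enable mlag port 1:1 peer "stack-b" id 101
enable mlag port 1:17 peer "stack-b" id 117
enable mlag port 1:33 peer "stack-b" id 133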

Thanks!
8 Replies

Erik_Auerswald
Contributor II
Hi,

It is my impression as well that the VMware guys do not like using LAGs, either static or LACP. This is the opposite of the networking guys, who want to use LAGs all the time. 😉

The VMware vSwitch is not a full software Ethernet switch; it is something similar, but different. It uses the concept of uplinks that connect the vSwitch to the physical network. A frame entering through one uplink is never forwarded out another uplink; it is forwarded to virtual ports only. Redundant uplinks therefore work without grouping them into a LAG.
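
A minimal sketch of that model on an ESXi standard vSwitch, assuming two uplinks vmnic0 and vmnic1 (names are illustrative) and the default "route based on originating virtual port ID" policy, which uses both uplinks without any LAG on the physical switches:

# Both uplinks active, no LAG/LACP on the switch side;
# each virtual port is pinned to exactly one uplink at a time
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=portid \
    --active-uplinks=vmnic0,vmnic1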

Failover time in a LAG is usually determined by the time needed to detect a link-down condition. LACP (with 30 s hellos and a 90 s hold time) is not used as the primary failover mechanism.

Not using a LAG on VMware still relies on link-down detection as the signal to fail over.
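
If faster LACP detection were ever needed on the EXOS side, the timers can be shortened per sharing group; a hedged example, assuming an existing LACP LAG whose master port is 1:1:

# Short timers: 1 s hellos, 3 s timeout instead of 30 s / 90 s
configure sharing 1:1 lacp timeout short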

I would prefer the use of LAGs, but that is from the network point of view, not the VMware one.

Erik

Ty_Kolff
New Contributor II
No. I tested plugging one NIC into each x670 in a pair of MLAG/VRRP cores. It worked best when we simply plugged a NIC into each core on a port with no MLAG or LAG configuration whatsoever.

Note that we did not team the NICs together. The VMware guys I talked to didn't recommend teaming the NICs on the ESX host.
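
As a sketch of the non-teamed variant Ty describes, one option on the ESX side is an explicit active/standby failover order driven purely by link state (the vSwitch and vmnic names are assumptions):

# One active uplink per vSwitch, one standby; failover on link-down only
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 \
    --load-balancing=explicit \
    --active-uplinks=vmnic0 \
    --standby-uplinks=vmnic1 \
    --failure-detection=link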

Paul_Russo
Extreme Employee
Hey Ty

To be clear, are you saying that when the LAG goes into a single switch and you lose a link, it behaves better than when the LAG is spread across the MLAG pair?

If so, how would you handle redundancy?

Thanks
P

Ty_Kolff
New Contributor II
I recently did some testing with this scenario and we found that the ESX host worked better if it was just plugged into each of the x670s with no MLAG configuration whatsoever.