Dual x670V Stacks, MLAG, and VMware ESX

New Contributor
Pro/Con design considerations for configuring (2) 2-node x670V stacks with emphasis on availability and performance.

Prior to submitting for budget approval, I was hoping to get feedback from anyone with experience configuring MLAGs with X670V-48t switches and VMware. I'm currently running out of available ports, and instead of adding just one x670V to my existing stack, I'm considering adding a separate stack and configuring MLAGs to our ESX hypervisors, which use standard vSwitches. As it stands with our single 2-node stack, I would have to shut down the entire server/storage footprint to update the EXOS software. I would be grateful for comments in favor of or in opposition to the configuration below.

End result:
(2) 2-node x670V stacks
MLAG Stack A Port 1:1 with Stack B Port 1:1 for generic server traffic
MLAG Stack A Port 1:17 with Stack B Port 1:17 for NFS storage traffic
MLAG Stack A Port 1:33 with Stack B Port 1:33 for management traffic
Server environment is all VMware with 4 10G and 4 1G NICs per Host
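
For reference, a minimal EXOS sketch of what one MLAG pair from the list above might look like. Port numbers, peer names, the ISC port (2:48), and the ISC addressing are all hypothetical placeholders; adapt them to your environment and verify against the EXOS release notes for your version.

```
# --- On Stack A (mirror the config on Stack B with its own ISC address) ---
# Inter-Switch Connection (ISC) VLAN between the two stacks
create vlan isc
configure vlan isc add ports 2:48 tagged
configure vlan isc ipaddress 10.0.0.1/30

# Define Stack B as the MLAG peer, reachable over the ISC
create mlag peer stack-b
configure mlag peer stack-b ipaddress 10.0.0.2

# Static LAG on the server-facing port, then bind it to an MLAG ID
# (the same MLAG ID must be configured on the matching port of Stack B)
enable sharing 1:1 grouping 1:1 algorithm address-based L3
enable mlag port 1:1 peer stack-b id 101
```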


New Contributor
Thanks Paul and Erik for the quick responses! Regarding NFS storage and static LAGs: if an active flow/LAG member goes down, does the vSwitch recover gracefully to the other LAG member, or is there a chance of data corruption?

Data corruption because of network problems should be prevented by NFS, modulo bugs in the implementations.

In my experience, NFS is quite robust. My experience in this regard pertains primarily to classical UNIX and GNU/Linux implementations, as opposed to VMware and storage vendors.

Contributor II
Hi Scott,

please note that the ESXi Standard vSwitch cannot use LACP, so you would need static LAGs (port sharing without LACP) or individual physical ports to connect the ESXi servers via MLAG.
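
As a rough illustration of the static-LAG pairing described above (all port numbers and vSwitch names are hypothetical examples, not a tested recipe):

```
# EXOS side: static sharing, i.e. a LAG without LACP, on a hypothetical port
enable sharing 1:17 grouping 1:17 algorithm address-based L3

# ESXi side: a static LAG requires the Standard vSwitch to load-balance
# by IP hash; here on a hypothetical vSwitch named vSwitch1
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 --load-balancing=iphash
```

Both ends have to agree: a static LAG on the switch with anything other than IP-hash teaming on the vSwitch (or vice versa) can lead to flapping MAC addresses or dropped traffic.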

ESXi does not need to use a LAG for the vSwitch uplinks. If you use a load balancing mechanism that keeps all flows from one VM on one uplink (e.g. based on source MAC or based on source port [of the vSwitch]), you can connect different ESXi server uplinks active/active to different switches. The switches just need to be in the same layer 2 domain (same VLANs).
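
A sketch of the no-LAG alternative described above, using `esxcli` to pin each VM's flows to one uplink via the default port-ID policy (the vSwitch name is a hypothetical example):

```
# Teaming policy "Route based on originating virtual port ID":
# each VM's traffic stays on one physical uplink, so the uplinks
# can go active/active to two different switches with no LAG at all
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch0 --load-balancing=portid

# Verify the resulting teaming/failover policy
esxcli network vswitch standard policy failover get \
    --vswitch-name=vSwitch0
```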

The Distributed vSwitch is required to use LACP for ESXi uplinks (it needs the Enterprise Plus license level). Load Based Teaming (LBT), preferred by many VMware admins, also requires the Distributed vSwitch.


Extreme Employee
Hey Scott

I am a big fan of using MLAG for the reason you mentioned above. MLAG gives you complete failover redundancy plus additional bandwidth through the LAG from the end station.

The only con with MLAG is that it requires more configuration than a stack, but I think the added redundancy is well worth it.

I hope that helps.