<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic RE: Dual x670V Stacks, MLAG, and VMware ESX in ExtremeSwitching (Other)</title>
    <link>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12332#M3360</link>
    <description>Hey Ty&lt;BR /&gt;
&lt;BR /&gt;
To be clear, are you saying that when the LAG goes into a single switch and you lose a link, the failover behaves better than when the LAG is spread across an MLAG pair?&lt;BR /&gt;
&lt;BR /&gt;
If so, how would you handle redundancy?&lt;BR /&gt;
&lt;BR /&gt;
Thanks&lt;BR /&gt;
P&lt;BR /&gt;</description>
    <pubDate>Tue, 15 Nov 2016 03:54:00 GMT</pubDate>
    <dc:creator>Paul_Russo</dc:creator>
    <dc:date>2016-11-15T03:54:00Z</dc:date>
    <item>
      <title>Dual x670V Stacks, MLAG, and VMware ESX</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12326#M3354</link>
      <description>Pro/Con design considerations for configuring (2) 2-node x670V stacks with emphasis on availability and performance.&lt;BR /&gt;
&lt;BR /&gt;
Prior to submitting for budget approval, I was hoping to get feedback from anyone with experience configuring MLAGs with X670V-48t switches and VMware. I'm currently running out of available ports, and instead of adding just one x670V to my existing stack, I'm considering adding a separate stack and configuring MLAGs to our ESX hypervisors with standard vSwitches. Right now I would have to shut down the entire server/storage footprint to update the EXOS software on our single 2-node stack. I would be grateful to hear comments "in favor of" or "in opposition to" the configuration below.&lt;BR /&gt;
&lt;BR /&gt;
End result:  &lt;BR /&gt;
(2) 2-node x670V stacks&lt;BR /&gt;
MLAG Stack A Port 1:1 with Stack B Port 1:1 for generic server traffic&lt;BR /&gt;
MLAG Stack A Port 1:17 with Stack B Port 1:17 for NFS storage traffic&lt;BR /&gt;
MLAG Stack A Port 1:33 with Stack B Port 1:33 for management traffic&lt;BR /&gt;
Server environment is all VMware with 4 10G and 4 1G NICs per Host&lt;BR /&gt;
&lt;BR /&gt;
Thanks!</description>
      <pubDate>Sun, 13 Nov 2016 08:08:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12326#M3354</guid>
      <dc:creator>Scott_Benne</dc:creator>
      <dc:date>2016-11-13T08:08:00Z</dc:date>
    </item>
    <item>
      <title>RE: Dual x670V Stacks, MLAG, and VMware ESX</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12327#M3355</link>
      <description>Hey Scott&lt;BR /&gt;
&lt;BR /&gt;
I am a big fan of using MLAG for the reason you mentioned above: MLAG gives you full failover redundancy across both switches, plus the additional bandwidth of the LAG from the end station.&lt;BR /&gt;
&lt;BR /&gt;
The only con of MLAG is the additional configuration it needs compared to a stack, but I think the added redundancy is well worth it.&lt;BR /&gt;
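&lt;BR /&gt;
For reference, the extra configuration is roughly the following on each MLAG peer. This is only a sketch; the VLAN name, ISC port 1:48, peer IP addresses, and MLAG id are hypothetical, so adjust them to your environment:&lt;BR /&gt;
&lt;PRE&gt;
# Inter-Switch Connection (ISC) VLAN between the two MLAG peers
create vlan isc
configure vlan isc add ports 1:48 untagged
configure vlan isc ipaddress 10.0.0.1/30

# Define the MLAG peer and attach a server-facing port to MLAG id 1
create mlag peer "peer2"
configure mlag peer "peer2" ipaddress 10.0.0.2
enable mlag port 1:1 peer "peer2" id 1
&lt;/PRE&gt;
The second switch mirrors this with its own ISC address (10.0.0.2 here) and the same MLAG id on its corresponding port.&lt;BR /&gt;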
&lt;BR /&gt;
I hope that helps.&lt;BR /&gt;
&lt;BR /&gt;
P&lt;BR /&gt;
&lt;BR /&gt;</description>
      <pubDate>Mon, 14 Nov 2016 07:43:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12327#M3355</guid>
      <dc:creator>Paul_Russo</dc:creator>
      <dc:date>2016-11-14T07:43:00Z</dc:date>
    </item>
    <item>
      <title>RE: Dual x670V Stacks, MLAG, and VMware ESX</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12328#M3356</link>
      <description>Hi Scott,&lt;BR /&gt;
&lt;BR /&gt;
Please note that the ESXi Standard vSwitch cannot use LACP, so you would need static LAGs (port sharing without LACP, or possibly plain physical ports) to connect the ESXi servers via MLAG.&lt;BR /&gt;
&lt;BR /&gt;
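On the EXOS side, a static LAG is simply port sharing configured without the lacp keyword; for example (master port and member ports are only an illustration):&lt;BR /&gt;
&lt;PRE&gt;
# Static (non-LACP) LAG: master port 1:1, members 1:1 through 1:2
enable sharing 1:1 grouping 1:1-1:2 algorithm address-based L3
&lt;/PRE&gt;
&lt;BR /&gt;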
ESXi does not need to use a LAG for the vSwitch uplinks. If you use a load balancing mechanism that keeps all flows from one VM on one uplink (e.g. &lt;I&gt;based on source MAC&lt;/I&gt; or &lt;I&gt;based on source port&lt;/I&gt; [of the vSwitch]), you can connect different ESXi server uplinks active/active to different switches. The switches just need to be in the same layer 2 domain (same VLANs).&lt;BR /&gt;
&lt;BR /&gt;
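The corresponding Standard vSwitch policy can be set from the ESXi shell; for example, to keep all traffic from a VM on one uplink based on its originating virtual port (the vSwitch name here is just an example):&lt;BR /&gt;
&lt;PRE&gt;
# Route based on originating virtual port id; no LAG needed on the switches
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --load-balancing=portid
&lt;/PRE&gt;
&lt;BR /&gt;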
Using LACP for ESXi uplinks requires the Distributed vSwitch (Enterprise Plus license level). Load Based Teaming (LBT), preferred by many VMware admins, requires the Distributed vSwitch as well.&lt;BR /&gt;
&lt;BR /&gt;
Erik</description>
      <pubDate>Mon, 14 Nov 2016 20:04:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12328#M3356</guid>
      <dc:creator>Erik_Auerswald</dc:creator>
      <dc:date>2016-11-14T20:04:00Z</dc:date>
    </item>
    <item>
      <title>RE: Dual x670V Stacks, MLAG, and VMware ESX</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12329#M3357</link>
      <description>Thanks Paul and Erik for the quick response!  In regards to NFS storage and static LAGs, in the event an active flow/LAG member would go down, does the vSwitch recover gracefully to the other LAG member or is there a chance of data corruption?&lt;BR /&gt;</description>
      <pubDate>Tue, 15 Nov 2016 00:12:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12329#M3357</guid>
      <dc:creator>Scott_Benne</dc:creator>
      <dc:date>2016-11-15T00:12:00Z</dc:date>
    </item>
    <item>
      <title>RE: Dual x670V Stacks, MLAG, and VMware ESX</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12330#M3358</link>
      <description>Data corruption because of network problems should be prevented by NFS, modulo bugs in the implementations.&lt;BR /&gt;
&lt;BR /&gt;
In my experience, NFS is quite robust, although my experience here pertains primarily to classical UNIX and GNU/Linux implementations rather than those of VMware and storage vendors.</description>
      <pubDate>Tue, 15 Nov 2016 00:12:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12330#M3358</guid>
      <dc:creator>Erik_Auerswald</dc:creator>
      <dc:date>2016-11-15T00:12:00Z</dc:date>
    </item>
    <item>
      <title>RE: Dual x670V Stacks, MLAG, and VMware ESX</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12331#M3359</link>
      <description>I recently did some testing with this scenario and we found that the ESX host worked better if it was just plugged into each of the x670s with no MLAG configuration whatsoever.</description>
      <pubDate>Tue, 15 Nov 2016 03:43:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12331#M3359</guid>
      <dc:creator>Ty_Kolff</dc:creator>
      <dc:date>2016-11-15T03:43:00Z</dc:date>
    </item>
    <item>
      <title>RE: Dual x670V Stacks, MLAG, and VMware ESX</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12332#M3360</link>
      <description>Hey Ty&lt;BR /&gt;
&lt;BR /&gt;
To be clear, are you saying that when the LAG goes into a single switch and you lose a link, the failover behaves better than when the LAG is spread across an MLAG pair?&lt;BR /&gt;
&lt;BR /&gt;
If so, how would you handle redundancy?&lt;BR /&gt;
&lt;BR /&gt;
Thanks&lt;BR /&gt;
P&lt;BR /&gt;</description>
      <pubDate>Tue, 15 Nov 2016 03:54:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12332#M3360</guid>
      <dc:creator>Paul_Russo</dc:creator>
      <dc:date>2016-11-15T03:54:00Z</dc:date>
    </item>
    <item>
      <title>RE: Dual x670V Stacks, MLAG, and VMware ESX</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12333#M3361</link>
      <description>No, I tested plugging one NIC into each x670 in a pair of MLAG/VRRP cores.  It worked best when we just plugged in a NIC into each of the cores in a port with no MLAG or LAG configuration whatsoever. &lt;BR /&gt;
&lt;BR /&gt;
Note that we did not team the NICs together. The VMware engineers I talked to did not recommend teaming the NICs on the ESX host.&lt;BR /&gt;
&lt;BR /&gt;</description>
      <pubDate>Tue, 15 Nov 2016 04:05:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12333#M3361</guid>
      <dc:creator>Ty_Kolff</dc:creator>
      <dc:date>2016-11-15T04:05:00Z</dc:date>
    </item>
    <item>
      <title>RE: Dual x670V Stacks, MLAG, and VMware ESX</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12334#M3362</link>
      <description>Hi,&lt;BR /&gt;
&lt;BR /&gt;
It is my impression as well that the VMware guys do not like using LAGs, either static or with LACP. This is in contrast to the networking guys, who want to use LAGs all the time. &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;BR /&gt;
&lt;BR /&gt;
The VMware vSwitch is not a software Ethernet switch; it is something similar, but different. It uses the concept of &lt;I&gt;uplinks&lt;/I&gt; that connect the vSwitch to the network. A frame entering one uplink is never forwarded out another uplink; it is sent to virtual ports only. Thus redundant uplinks work without grouping them into a LAG.&lt;BR /&gt;
&lt;BR /&gt;
Failover time in a LAG is usually determined by the time needed to detect a link-down condition. LACP (with 30s hellos and a 90s hold time) is not used as the primary failover mechanism.&lt;BR /&gt;
&lt;BR /&gt;
Without a LAG, VMware likewise relies on link-down detection as the signal to fail over.&lt;BR /&gt;
&lt;BR /&gt;
I would prefer to use LAGs, but that is the networking point of view, not the VMware one.&lt;BR /&gt;
&lt;BR /&gt;
Erik</description>
      <pubDate>Wed, 16 Nov 2016 16:38:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-other/dual-x670v-stacks-mlag-and-vmware-esx/m-p/12334#M3362</guid>
      <dc:creator>Erik_Auerswald</dc:creator>
      <dc:date>2016-11-16T16:38:00Z</dc:date>
    </item>
  </channel>
</rss>