
EXOS: create LAG for 2 switches


I have two new EXOS switches (X620-16x) that will carry iSCSI traffic for a Nimble SAN. I am new to EXOS and still learning EOS, so this is all new to me. Nimble recommends that the switches be "trunked" together, and the health check in the admin console says the same. I assume this means the switches need a LAG port or sharing enabled between them. How would I accomplish this on these two switches? I have been looking at the commands in the EXOS Commands PDF that I found, but I am not sure about some settings, such as which algorithm to use and which ports to include in the LAG. Could someone help me out or give a good example of this type of configuration?
Thanks in advance.

5 replies

Hi Phillip,

You can use MLAG. Here is the doc:

https://gtacknowledge.extremenetworks.com/articles/How_To/How-to-configure-MLAG-in-Extreme-switches
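
For example, here is a minimal sketch of the MLAG peering on switch A (the VLAN name, tag, IP addresses, and port numbers are placeholders to adapt to your setup):

# ISC VLAN carries the MLAG control traffic between the two switches
create vlan ISC
configure vlan ISC tag 4000
configure vlan ISC add ports 16 tagged
configure vlan ISC ipaddress 10.0.0.1/30
# define the peer and bind a front-panel port to MLAG id 1
create mlag peer "switch-b"
configure mlag peer "switch-b" ipaddress 10.0.0.2
enable mlag port 1 peer "switch-b" id 1

Switch B mirrors this with 10.0.0.2/30 on the ISC VLAN, a peer entry pointing back at 10.0.0.1, and the same MLAG id on its member port.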

Alternatively, you can stack the two switches using SummitStack-V, but with this approach the stacking links are limited to 2x 10 Gbps. Here is the doc:

http://documentation.extremenetworks.com/summit_16/Summit_Family_HW_Install/Stacking/c_using-the-sum...
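
Roughly, the stacking setup looks like this (a sketch; on the X620 the stacking links are 10G ports, so check the hardware guide above for which ports your model supports):

# run on each switch, then reboot for stacking-support to take effect
configure stacking-support stack-ports all selection alternate
enable stacking-support
# after the reboots, run this once on the intended master node
enable stacking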

Best regards,
Philip, I would tend towards connecting the Nimble with a dual-subnet design. Your iSCSI-A subnet would live on X620-A and your iSCSI-B subnet would live on X620-B. Each host and Nimble controller would have an interface on each iSCSI subnet/switch. The hosts' MPIO setup would recognize multiple paths and make use of them accordingly. In a configuration like this you would need no uplink at all between the iSCSI switches.

MLAG (or any LAG at all) is generally not used for iSCSI; MPIO is preferred instead. MLAG would be a great solution for NFS (file-level) connections to storage systems, where you're relying on the network to do load distribution across storage connections. Hope this helps.
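To make that concrete, each switch would carry only its own subnet's VLAN, e.g. on X620-A (the VLAN name, tag, and port list are placeholders; X620-B gets an iSCSI_B VLAN the same way):

# X620-A only: one VLAN for the iSCSI-A subnet, no link to X620-B
create vlan iSCSI_A
configure vlan iSCSI_A tag 101
configure vlan iSCSI_A add ports 1-8 untagged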
I have to get back on the horn with Nimble because there is another question I need to ask them, but I think what they want is for the connections from each of the SAN's two controllers to go to both switches in a LAG. I see what you're saying about the dual subnets, but Nimble recommends having eth1 on each controller connected to the LAG, or as they say, "trunked" together. I will work on this more after I talk with them about my other issue.
Philip, check out page 6 of Nimble's "VMware vSphere® 6 Deployment Considerations" guide at https://cdm-cdn.nimblestorage.com/2016/12/08171022/Nimble-VMware_vSphere6-DeploymentConsiderations.pdf. With either recommended iSCSI connection method (single-subnet or dual-subnet), you will NOT be connecting the ESXi hosts to the storage switch(es) with LAGs. For a single-subnet setup you would need to connect the two storage switches together (probably with a LAG), but you wouldn't be connecting either the ESXi hosts or the Nimble ports to the switches via LAGs, therefore you would not be using MLAG.
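If you do end up with the single-subnet design, the switch-to-switch LAG on each X620 would look roughly like this (ports 13-14, the VLAN name, and the tag are assumptions):

# bundle the two inter-switch links into one LACP LAG (port 13 is the master)
enable sharing 13 grouping 13-14 algorithm address-based L3 lacp
# carry the iSCSI VLAN over the LAG; with sharing enabled, only the master port is added to VLANs
create vlan iSCSI
configure vlan iSCSI tag 100
configure vlan iSCSI add ports 13 tagged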
I talked to Nimble support today because I had some other questions as well as this one, and they told me there is no need to "trunk" anything. The health-status message I saw on the controller did not give enough information for me to know what it was referring to. I just needed to swap some cabling around so that each NIC on each controller lands on the correct switch port, and make sure those ports are all tagged on the VLAN for iSCSI traffic. No LAG/MLAG is needed. That cleared the health-status alert, and now I can move on to fixing another issue with one of the switches.
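
In case it helps anyone else, the end state was simply each controller NIC on a port tagged for the iSCSI VLAN, e.g. (the VLAN name, tag, and ports here are examples):

# one iSCSI VLAN, controller-facing ports tagged
create vlan iSCSI
configure vlan iSCSI tag 100
configure vlan iSCSI add ports 1-4 tagged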
