
Will "disable stacking-support" reset my switch to defaults?

David_Nelson
New Contributor III

I have an X590-24x-1q-2c that was previously in a stack but is now in production as a stand-alone switch. I would like to use the QSFP28 ports on the switch, but they appear to be reserved for stacking.

 

I believe the solution is to run disable stacking-support, but I am concerned that, similar to other “stack breaking” activities, this will result in my switch being reset to factory defaults on reboot. Can anyone tell me with a significant degree of confidence whether this will be the case?
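For reference, the sequence I’m contemplating is simply the following; whether the reboot also wipes the config is exactly what I’m unsure about:

disable stacking-support
save configuration
reboot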

 

BTW, the documentation in the EXOS 30.6 Configuration Guide for removing a node from a stack is pretty confusing IMO. It says to determine whether the node is using the SummitStack-V feature by running the “show stacking stacking-ports” command to see if the stacking ports on the target node are using alternative stacking ports, and to run “unconfigure stacking-support” if alternative stacking ports are in use. On the X590 the output of this command shows “Native Stacking” with nary a mention of “alternative” anything, so I did not run unconfigure stacking-support, and now it seems I can’t use these ports for uplinks because of it. 😕
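For anyone following along, the guide’s procedure as I read it boils down to this (paraphrased, not verbatim from the guide):

show stacking stacking-ports      # check whether alternate stacking ports are in use
unconfigure stacking-support      # per the guide, run only if alternate ports are in use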

 

Anyhow, that’s how I got here, if you wanted the backstory.

 

Thanks
David

 

8 REPLIES

FredrikB
Contributor II

 

@David_Nelson wrote:

Any guidance on what constitutes sufficient bandwidth for an MLAG pair, or where to find this info? I imagine it must be calculated/estimated based on the switch and the capacity of the ports that will run MLAG. I would have absolutely thought forming a LAG with two 100 Gb QSFP28 ports would perform well for the IST between MLAG peers.

 

Have a great day!
David

 


Well, the issue is a bit complicated, but I’ll give it a shot 🙂 First of all, I would like the uplink to be 40 or 100 G. If you only need n x 10 G, you’re fine using the 40/100 G ports for ISC/IST (IST is the Avaya/VOSS name for it). If you want one 40/100 G uplink from each X590, you only have one left on each switch. Those would then be used for the ISC link.

We actually struggled with Extreme to get them to officially verify whether a non-full-mesh uplink like this is supported from their side. All examples in the docs are (/were?) with a full mesh uplink structure between two MLAG pairs. The docs also stated that this was the (only) way to do it. I think our conversation with them was what led to an official approval for the simpler type of uplink where (in your case) each X590 would have one uplink to only one of the switches in the MLAG pair in the core.

This type of simpler MLAG to MLAG connection means that traffic destined to something connected to the X590s can come to the “wrong” X590 if the destination MAC is terminated with a simple tagged or untagged port as opposed to a LAG. If it’s connected via a LAG (say an access switch), this is not a problem. The problem is with things like VMware hosts where they do not run LACP LAGs but VMware’s proprietary load balancing where hosts are more or less randomly “patched” to one of the physical ports. This means that a lot of traffic is sent over the ISC link. This also applies to stand-alone servers that are connected to a single switch (X590 in this case).
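To make the port roles concrete, the basic MLAG peering on each X590 looks roughly like this; the peer name, VLAN, addresses and port numbers below are just placeholders, so check the Configuration Guide for your release:

# ISC VLAN between the two MLAG peers (here on a 100 G port)
create vlan isc
configure vlan isc add ports 27 tagged
configure vlan isc ipaddress 10.0.0.1/30
# Define the peer, then put a downstream-facing port into MLAG id 1
create mlag peer "mlag-peer"
configure mlag peer "mlag-peer" ipaddress 10.0.0.2 vr VR-Default
enable mlag port 1 peer "mlag-peer" id 1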

Ok, so this is not a big problem because we still have 100 G between the switches, right? Yup, correct. So for redundancy, we’d like a second ISC, but we don’t have the interfaces for that. We then need to resort to an “alternate ISC” (it’s in the docs...). The problem with this solution is that if the ISC goes down, for whatever reason, one of the switches in the MLAG pair disables all the …[drumroll]… MLAG ports, but not ALL ports! The VMware host still thinks all ports are good for sending traffic to and from!
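For completeness, the alternate path is configured with something like the following; the address and VR are placeholders, and I’m quoting the syntax from memory, so verify it against the guide:

# Keepalive path used to health-check the peer if the ISC itself goes down
configure mlag peer "mlag-peer" alternate ipaddress 192.168.10.2 vr VR-Mgmt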

So what about the full mesh setup, then? We still have the interfaces for that, even if we then only have 10 G ports left for the ISC, and two should be enough for redundancy, right? Well, yes, not much traffic should flow over the ISC, but what traffic will? Say one 100 G uplink goes down: traffic then flows over the ISC, but that’s an error scenario, so not too much to worry about for most people.

Again, the problem is with the VMware hosts and other things without a proper LAG connecting them to the X590s. The upstream switch will send packets destined for host A to either of the X590s. If host A is connected with a LAG to the X590s, that’s OK, but if A is connected to X590 #1 and the packet was sent to #2, then #2 needs to forward it to #1, and that’s via the ISC! We saw massive packet drops from full buffers very rapidly in this scenario. Do a “show ports congestion” to verify. The server guys were very unhappy!

Why is that? Well, you feed the X590s with a 40/100 G link, so you can easily burst packets at line rate there, but even with 4 x 10 G in the ISC and 40 G uplinks you’re not safe by far. The LAG hashing/distribution algorithm is not designed to take into account which port in the LAG is loaded at the moment, so it may well decide to send packets into a 10 G link that is already dropping packets. Read up on micro bursts if you don’t understand how this can happen (or PM me 😉).
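To see whether you’re hitting this, the congestion counters are the place to look, e.g. (the port list is just an example):

show ports 1-4 congestion      # per-port packet drops due to congestion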

Wow, I’ll probably get banned for posting an essay to answer a simple question 🙂 I hope this helps!

David_Nelson
New Contributor III

After moving connected devices to another switch, I went ahead, ran disable stacking-support, and rebooted the switch. It did not default the config, and the QSFP28 ports I want to use have changed status from Not Present to Ready, so I think I have accomplished my goal.

 

I would not say this means the command can’t cause a switch to default. I believe in this case it did not because the switch was not running a stacking config, i.e. one where the port references in the config include the slot, e.g. 1:2. If it had been running a stacking config, I expect that on booting up with stacking-support disabled the switch would have had no option but to load a default config.
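To illustrate the difference (the first line is hypothetical; the second is verbatim from my config):

configure vr VR-Default add ports 1:1-1:36    # stacking-style: slot:port references
configure vr VR-Default add ports 1-36        # stand-alone: plain port numbers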

 

Thank you to all of you who contributed here.

 

Have a great day!
David

David_Nelson
New Contributor III

 

Any guidance on what constitutes sufficient bandwidth for an MLAG pair, or where to find this info? I imagine it must be calculated/estimated based on the switch and the capacity of the ports that will run MLAG. I would have absolutely thought forming a LAG with two 100 Gb QSFP28 ports would perform well for the IST between MLAG peers.

 

Have a great day!
David

 

David_Nelson
New Contributor III

FredrikB,

 

I agree, here is some hopefully relevant output from the switch:

 

X590-IDF9.1 # show stacking
Stack Topology is a Daisy-Chain
This node is not in an Active Topology
Node MAC Address    Slot  Stack State  Role     Flags
------------------  ----  -----------  -------  ---
*00:04:96:cd:b1:62  -     Disabled     Master   ---
* - Indicates this node
Flags:(C) Candidate for this active topology, (A) Active Node
(O) node may be in Other active topology

 

X590-IDF9.2 # show stacking-support
Stack    Available Ports
Port    Native  Alternate  Configured  Current
-----   -----------------  ----------  ----------
1       Yes *   No         Native      Native
2       Yes *   No         Native      Native
stacking-support:          Enabled     Enabled
Flags: * - Current stack port selection

 

Just a bit of output from show config, so you can see the port references are not in “stack format”:

X590-IDF9.3 # show config module vlan
# Module vlan configuration.
#
configure vlan default delete ports all
configure vr VR-Default delete ports 1-36
configure vr VR-Default add ports 1-36

 

Thank you. I appreciate your help.

David
