Wing 5.8 - Cluster - Config

  • Question
  • Updated 1 week ago
I have two RFS units set up in a cluster. Today I failed the primary over to ensure that the backup took over, and then failed back at the scheduled time, which worked. However, I noticed that clients on one of the WLANs could not connect. When the primary came back online and the APs moved back to it, the clients could connect again, so there is some issue with the config. If the cluster is taken down and then recreated, will this push the config from the primary to the backup? Or is there a command line I could use to ensure the backup has the most up-to-date version of the config that works on the primary?

Phil storey


Posted 1 week ago


Michael (Misha) Elin, SE

Hello,
Please check on both units that the client's target VLAN is available. "show interface switchport" should display exactly the same results on both cluster members. Also consider using local bridging instead of tunnelling - this removes the controller from the data path.
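A minimal sketch of the two checks suggested above, assuming standard WiNG 5.x CLI and a hypothetical WLAN named "guest" (command names as commonly documented; verify against your release):

```
! Run on EACH cluster member and compare the output line by line:
show interface switchport

! To move a WLAN from tunnelled to local bridging so client traffic
! is bridged at the AP instead of being tunnelled to the controller:
configure terminal
wlan guest
  bridging-mode local
commit write memory
```

With local bridging the AP must have the client VLAN available on its own uplink, so verify the access-layer switch ports carry that VLAN before changing the mode.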

Misha

Timo Sass

The config is only pushed if there is a config mismatch. In that case, the controller with the higher cluster ID wins.

If you make no changes during the downtime, no config is pushed.

Only one of the clients had the problem, and all the others worked? That sounds more like a client problem to me. Did you check the log?

Phil storey


Hi Timo - it was all clients on only one WLAN. As soon as the primary was back up and running and became active again, all the clients connected.


Hi Michael - you mention local bridging instead of tunnelling; are you referring to the WLAN bridging mode?

Timo Sass

Do you have some device overrides on the RFS that could be producing the problem? Or maybe the VLAN isn't available on the LAN side of the backup controller.

Phil storey


Hi - the VLAN is available. I'm not sure what you mean by "some device overrides on the RFS".

Timo Sass

Connect to both RFS units via CLI and type:
enable
self
show context

This shows you the local overrides on each RFS. Maybe the interface has a local override.

Phil storey


Hi - there are some differences between the primary and the backup. The cluster is up and I can see both members (2); the operational state is active, the primary is the master and the backup is not. Is there a command that will refresh the config on the backup, or pull the config in from the primary?

Carlos Assunção

Hi,

How did you create the cluster in the first place?

Best Regards

Christoph S., Employee

Hello Phil,

When creating the cluster using the join-cluster command, the primary config is mirrored on the standby cluster member. Run "show running-config" on both and make sure that this is the case.
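A quick way to do that comparison, sketched with assumed file names (the diff step happens on a workstation, not on the controller):

```
! On the primary controller:
show running-config

! On the standby controller:
show running-config

! Capture both outputs (e.g. via your terminal client's logging) and
! compare them offline on a workstation:
!   diff primary.cfg standby.cfg
```

Any WLAN-to-VLAN mapping or interface line that appears in one output but not the other is a likely cause of clients failing only while the standby is active.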

If you have multiple VLANs, I would start by checking that all of those VLANs are allowed through the switchport the standby controller is connected to. It might be that the specific VLAN the WLAN is mapped to is not allowed out of either the port on the standby controller or the switch port the standby controller is connected to.

If you feel that this is a clustering issue, you might want to try this: https://gtacknowledge.extremenetworks.com/articles/How_To/How-to-disable-and-recreate-a-non-working-...
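The article above covers disabling and recreating a non-working cluster. The rough sequence on the standby is to remove its cluster membership and rejoin, which re-mirrors the primary's config. Treat the following as a sketch only - exact syntax varies by WiNG release, the IP and credentials are placeholders, and you should back up both configs first:

```
! On the STANDBY controller, drop into its own device context
! and remove the cluster configuration:
self
configure terminal
no cluster name
commit write memory

! Rejoin the cluster so the primary's config is mirrored again
! (<primary-ip> and <password> are placeholders):
join-cluster <primary-ip> user admin password <password> mode standby
commit write memory
```

After rejoining, repeat the "show running-config" comparison on both members to confirm the configs now match before scheduling another failover test.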