
B@AP or B@EWC to save on large spanned subnets conundrum

Anonymous
Not applicable
Hi,

Opening this topic up for some advice, to see what others may have done.

It's good practice to build networks without large broadcast domains, so I typically keep to, say, a /24 subnet per stack for data and voice. Where I'm coming unstuck is that in a large building with multiple stacks, I wouldn't therefore want the same wireless VLAN spanned across all those switches.
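To put rough numbers on the broadcast-domain trade-off above, here is a minimal sketch using Python's stdlib `ipaddress` module. The prefixes are illustrative, not taken from any specific deployment.

```python
# Compare broadcast-domain sizes for a few common wireless prefixes.
# (Illustrative prefixes only; adjust to your own addressing plan.)
import ipaddress

for prefix in ("10.0.0.0/24", "10.0.0.0/22", "10.0.0.0/21"):
    net = ipaddress.ip_network(prefix)
    # usable hosts = total addresses minus network and broadcast addresses
    usable = net.num_addresses - 2
    print(f"{prefix}: {usable} usable hosts in one broadcast domain")
```

A /24 caps each domain at 254 hosts, while the larger prefixes discussed later in the thread (/22, /21) put 1022 and 2046 hosts in a single domain.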

I would need this, though, so that with any APs configured for B@AP, wireless devices keep their IP addresses as they roam around the campus.

The fix for this is to B@EWC and create a topology group, which would certainly address the problem.

The conundrum I have is that when building large networks, and large wireless networks in particular, you would need to move to bridging all traffic back to the controller, which is the opposite of what you would otherwise want to do in this situation: bridge traffic directly out of the AP.

Perhaps bridging all traffic on large to very large networks is perfectly fine, as long as you have high-availability controllers and distribute the load, and perhaps even add further controllers in a mobility group?

So I just wanted to get people's opinions and hear what others have done on large deployments like this.

Many thanks in advance
10 REPLIES

Craig_Guilmette
Extreme Employee
The theory that B@EWC is slow really isn't true. Is B@AP faster? Maybe, but I bet the difference isn't even measurable. Everybody worries about the controller uplink port, when most of the time the controller uplink ports are less than 20% utilized. Some of our controllers have 10-gig ports and can be configured in a 20-gig LAG, and all but the virtual controller support static LAG. I would not hesitate to solve your issue with B@EWC rather than horrible Q trunks everywhere or making users change IPs when they roam to an AP that puts them in a different subnet/VLAN. The NFL stadiums do B@EWC and they work fine.
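The uplink-utilization point above can be sanity-checked with back-of-the-envelope arithmetic. Every figure below except the 20-gig LAG is an assumption for illustration (AP count and per-AP load are hypothetical, not from the thread):

```python
# Rough headroom check for a controller uplink when all wireless
# traffic is bridged at the controller (B@EWC).
lag_gbps = 20.0          # 2 x 10G static LAG, as described above
ap_count = 150           # assumed AP count
avg_per_ap_mbps = 20.0   # assumed average per-AP offered load

offered_gbps = ap_count * avg_per_ap_mbps / 1000
utilization = offered_gbps / lag_gbps
print(f"{offered_gbps:.1f} Gb/s offered, {utilization:.0%} of the LAG")
```

Under these assumed loads the LAG sits well under the 20% figure quoted above; the exercise is worth repeating with your own AP counts and measured per-AP traffic before committing to the design.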

That's really good to know, Craig. Our installs are mostly on older controllers with 1 Gb interfaces, and we've seen some cases (mostly on WiNG) where the controllers run a bit CPU-heavy. The only other consideration is that many of us consider the firewall the central focal point for security (preventing us from having to look in multiple places to assess policy). In your stadium examples, do you have a finite cap on subnet size?

Is another reason the NFL does it that way that they do analytics, i.e. to have one port with the sensor/flow collector configured?

Anonymous
Not applicable
The NFL was something I hesitated to ask about, but I think I ended up getting two answers in one!

Thanks Craig.

Eric_Burke
New Contributor III
Martin,

I've got several installs with larger VLANs spanning multiple buildings without incident. We use an EAPS ring to tie all of our core switches together with LACP handoffs in each direction. We then use /22s on our larger wireless subnets and we B@AP into those same VLANs spanning the EAPS trunks. We've found that the overhead on the controller is much higher if done the other way, and the larger subnet size has minimal impact with respect to broadcast traffic. In the largest deployment we have a /21 allocated, but that's about as high as I'll go.

Eric (37-acre campus, 150 common-area APs).