MLAG/ISC/Teaming-Bonding

Hello,

We have Windows servers with that nifty Broadcom "Teaming" option, as well as Linux CentOS servers with "Bonding" - and of course VMware hosts with their virtual "switches". Now, to set them up properly for speed aggregation and/or redundancy, I need a sanity check please. (EXOS version 15.4.x.x / 15.5.x.x)

(1) Easy scenario: If the team plugs into the same switch, all I need to do is LAG the two ports (sharing/grouping), right? I have both speed aggregation and redundancy.

(2) If the team plugs into two different switches that are connected via an ISC link, do I treat it as a regular MLAG setup? I.e., create the ISC VLAN, add the LAG port, assign IP addresses, create the MLAG peer, enable MLAG on the port (grouped, if we're going to extremes) - all done? (See Concepts Guide example, page 294 or thereabouts.) Do I still have speed aggregation and redundancy?

(3) Now for the tricky part. Let's say the team plugs into two different (Summit) switches that do NOT have an ISC between each other, but those two switches are lagged to two switches (BlackDiamonds) which have MLAGs defined towards those edge switches (Summits). Kinda your 'standard'(?) two-tier scenario.
What do I do in this case (3)? I think I have proper redundancy - how about speed aggregation? Do I need to configure anything interesting on the Summits? The BDs? Anywhere?

In all those 3 scenarios, do things change depending on whether I use Windows/Broadcom Teaming vs. Linux Bonding or VMware's virtual switch? In the case of VMware, I presume I just treat it like a regular switch, though, like scenario (3).

And two general questions. The ISC VLAN - can I put it on its own separate Virtual Router, like "VR-ISC", just to make sure I don't accidentally enable ipforwarding and route things to it?

Regarding the MLAG ID: that's just an ID that's unique per switch but has to be the same on the peering switches, right? I'm second-guessing myself after reading this statement in the Concepts Guide: "...and an "mlag-id" which is used to reference the corresponding port on the MLAG peer switch..."

Thank you!

   Frank
rbrt_weiler

Hey Frank,

First of all: There is no real speed aggregation. LAGs work with some sharing algorithm, but that's session-based. E.g., if you have a server connected by 4x 1G, the maximum speed for a single session is still 1G.

Scenario 1: You're right. Simple sharing, done.
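
A minimal sketch of what that could look like on EXOS 15.x, assuming the server hangs off ports 1 and 2 (the port numbers and the hash algorithm choice are just examples; available algorithms depend on the platform):

    # Static LAG on ports 1-2 with port 1 as the master port.
    # The optional algorithm keyword controls how sessions are
    # hashed across the member links.
    enable sharing 1 grouping 1-2 algorithm address-based L3_L4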

Scenario 2: The two switches that the server is connected to need to be MLAG peers. Then you'd need to add the ports the server is connected to to an MLAG, using the same MLAG ID on both switches.
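
Hedged as a rough sketch (all port numbers, the peer name and the 1.1.1.x addresses are placeholders; the Concepts Guide example Frank mentioned is the authoritative reference), one peer could look like this, with the mirror image on the other switch:

    # ISC VLAN and MLAG peering
    create vlan isc
    configure vlan isc add ports 48 tagged      # ISC link to the peer
    configure vlan isc ipaddress 1.1.1.1/30     # peer uses 1.1.1.2/30
    create mlag peer "peer2"
    configure mlag peer "peer2" ipaddress 1.1.1.2

    # Server-facing port: same MLAG ID (100) on both switches
    enable sharing 10 grouping 10 lacp          # optional local LAG first
    enable mlag port 10 peer "peer2" id 100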

Scenario 3: In the worst case you've got yourself a loop. Don't do that.

For all scenarios: You _have_ to talk to your server guys. Teaming can be anything, same for bonding. That depends on whatever they configure on their server(s).

  • The simplest thing they can configure is active-standby. Works for every scenario and you don't have to do anything at all.
  • They can also configure static active-active. Works for scenario 1 and 2. You have to configure a static (M)LAG on your switch(es).
  • My preferred solution is to configure LACP on both sides. This basically adds a control protocol to the LAG and should also work with MLAG. Both the server guys and you have to configure that, on the server(s) and the switch(es); see the sketch after this list. Again, valid for scenarios 1 and 2.
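
On the switch side, the difference between the static and the LACP variant is a single keyword; a sketch with made-up port numbers:

    # Static active-active LAG
    enable sharing 1 grouping 1-2

    # The same LAG, but with LACP as the control protocol
    enable sharing 1 grouping 1-2 lacp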

Those points are true for Windows teaming, Linux bonding and VMware vSwitch.

Regarding your general questions:

I honestly don't know if you can put the ISC in a separate router, but I don't think so.

MLAG IDs have to be the same on both switches so that the peers can recognize that in reality it's one host connected to two different switches.

Frank

Thanks for your reply!

No true speed aggregation?! That's a bit of a bummer - I was hoping for better throughput to our backup servers. Well, I guess two sessions at 1G each is still better than before - just not as good as one session at 2G. Unless I misunderstand 'session'. Possibly (probably?) my fault for not quite understanding Link Aggregation (like I also thought that LACP was enabled by default. Which it is not.)

As to the ISC VLAN being in a separate VR - that may have been a stupid question. As soon as you separate VLANs into different VRs (for whatever reason) and have those various VLANs across multiple 'Summit' switches, you'd basically have to have a VR-agnostic ISC link, or you'd be in trouble really fast. I don't know how to prove/test this right now, though.

Scenario 3 - wouldn't it be sufficient to say "enable sharing X grouping X lacp" on both switches and let LACP sort it out in a Windows/Linux teaming/bonding environment (I can see the loop problem in VMware virtual switches)? In the worst case, my switches should at least generate a nice warning (and hopefully block a port) in the default config, right?
Apologies if my ignorance shows - I'm trying not to fall off the learning curve.

rbrt_weiler

If you want more speed, just upgrade to 10G, 40G or 100G ;-)

For scenario 3, configuring separate LAGs - with or without LACP - wouldn't help. Possible scenario:

  • one server with 4x 1G links
  • two 1G links to switch A
  • two 1G links to switch B
  • each switch has a separate LAG configured

What happens when you configure a LAG is that the switch basically uses the same MAC address for both links (at least when using LACP; not 100% sure for static LAGs). In the scenario above, let's say that switch A uses the MAC AA:AA:AA:11:11:11 and switch B uses BB:BB:BB:22:22:22.

Look at that from the server side: it's talking to two different devices! That cannot work unless you configure two teamings/bondings on the server and then use those two _logical_ links for an active-standby solution.

Let's say that you make both switches known as MLAG peers and ...

  • ... configure sharing on switch A.
  • ... configure sharing on switch B.
  • ... configure MLAG with the same ID for those links.

The simple effect for the server is that it's now talking to, for example, MAC CC:CC:CC:33:33:33 on all four ports. This simple fact in turn enables your server guys to configure a single teaming/bonding using all four ports, with or without LACP (remember - talk to them, it has to be the same on switch and server); a sketch of the server side follows below.
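
To illustrate that server side, a hedged sketch of a single Linux bonding over all four NICs with LACP; the interface names, the address and the CentOS 6-style ifcfg layout are just assumptions:

    # /etc/sysconfig/network-scripts/ifcfg-bond0 (placeholder values)
    DEVICE=bond0
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0
    ONBOOT=yes

    # Each member NIC (eth0..eth3) gets an ifcfg file with:
    # DEVICE=ethX, MASTER=bond0, SLAVE=yes, ONBOOT=yes

For the static active-active variant, mode=balance-xor would be the bonding counterpart instead of mode=802.3ad.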

That's LAG and MLAG, at least as I understand it. Corrections are welcome at any time.

Frank

Thank you for your explanation. I had another cup of coffee and thought "oh my, scenario 3 is stupid. There's no way that could ever work right!" - and you explained it much better!

As to LACP (with or without) - at least talking to the Linux guy is easy - that's me - which also makes testing and playing with it a lot easier. I'm trying to keep things simple and get by without using LACP - at least on the switch-to-switch connections.
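
For playing with it on the Linux side, the kernel exposes the bond state; assuming the bond is named bond0, this shows the mode, the per-slave status and (with 802.3ad) the LACP partner details:

    cat /proc/net/bonding/bond0
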
Naoman Nabil Ghani

VMware doesn't need LACP with switch MLAG.

You can use multiple ports with vSwitch load balancing (across one or more switches) and use VMware beacon probing to detect path failures.
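
For reference, one hedged way to check the active policy from the ESXi shell (vSwitch0 is a placeholder name):

    # Shows the load balancing policy and the failure detection
    # method (link status vs. beacon probing) for the vSwitch
    esxcli network vswitch standard policy failover get -v vSwitch0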

Alex Joda

We also use MLAG to connect VMware servers to our two cores, because we noticed that with MLAG the connection between two devices (e.g. two servers or edge switches on the net) always uses the shortest path, without using the ISC connection for network traffic. If possible, the connection will always be made on the same switch, or via the "right" uplink from the edge switch.

I am not sure whether this also works with vSwitch load balancing, because the switch is then no longer able to decide which MLAGs have the shortest path. Maybe someone can explain this in a little more depth...