NLB with Extreme

Create Date: Jul 30 2013 12:54PM

We just installed two X670V-48X switches and stacked them together using two of the SFP ports.  Everything is running great so far.

Slot-1      : XXXXXX-XX-XX XXXXX-XXXXX Rev 6.0 BootROM: 2.0.1.6    IMG: 15.3.1.4  
Slot-2      : XXXXXX-XX-XX XXXXX-XXXXX Rev 6.0 BootROM: 2.0.1.6    IMG: 15.3.1.4

We now want to run NLB for an ADFS server in multicast mode, binding a unicast IP address to a multicast MAC address.

10.0.0.1 to 03:bf:82:aa:23:01  (Please note that the IP/MAC have been changed from the actual addresses for this post.)

The ADFS server is running on VMware.  We need to be able to connect to this IP address from across subnets; we use OSPF for routing.  I haven't set up PIM or any other kind of multicast routing yet.  Because we need to bind a unicast IP address to a multicast MAC address, I don't even know whether multicast routing is needed.  I saw a couple of posts from 2011 saying Extreme switches can't handle NLB, but given their age I question whether that is still true.

The following command:

configure iparp add 10.0.0.1 03:bf:82:aa:23:01

adds the entry into the system, and I am able to ping the IP address as well.  When I type 'show iparp 10.0.0.1' it does return the entry.  However, when I type the commands:

show iparp
show fdb

it doesn't show what port the MAC address is on.  Below is a paste from the 'show iparp' output: the switch doesn't show which port the NLB host is on, while it does show the port (1:17) for another server.

VR-Default    10.0.0.1      03:bf:82:aa:23:01   0     YES  District      2     
VR-Default    10.0.0.2      00:50:56:96:00:09    1      NO  District      2     1:17

Likewise, 'show fdb' doesn't return an entry at all:

Slot-1 Stack.15 # show fdb 03:bf:82:aa:23:01

Mac                     Vlan       Age  Flags         Port / Virtual Port List
------------------------------------------------------------------------------

Flags : d - Dynamic, s - Static, p - Permanent, n - NetLogin, m - MAC, i - IP,
        x - IPX, l - lockdown MAC, L - lockdown-timeout MAC, M- Mirror, B - Egress Blackhole,
        b - Ingress Blackhole, v - MAC-Based VLAN, P - Private VLAN, T - VLAN translation,
        D - drop packet, h - Hardware Aging, o - IEEE 802.1ah Backbone MAC,
        S - Software Controlled Deletion, r - MSRP



Any help would be greatly appreciated!

Thanks.

b

(from bw447)
EtherNation User, Official Rep
Create Date: Aug 1 2013 12:19PM

*****UPDATE******

We did more testing by failing over a server in our NLB cluster.  It turns out it works very well; we don't miss a beat.  However, with only the iparp command in place, requests are flooded to all ports on our stack.  I then went ahead and created an fdb entry:

create fdbentry <mac> vlan <vlan-name> ports <port-list>

Once I create the entry, we lose connectivity.  That's not good.  I then delete the fdb entry and connectivity comes back, but traffic to the multicast MAC address is still flooded to all ports.
(from bw447)
EtherNation User, Official Rep
Create Date: Aug 7 2013 6:25AM

Dear bw447:

I have the same problem with NLB on Extreme.

I created the iparp entry on an Extreme x670-48v, and it then generates duplicate packets on our network.  The duplicated packets are those matching the iparp entry's IP (multicast MAC).  For example:

configure iparp add 10.0.0.1 03:bf:82:aa:23:01

Topology: x670-48v (stacking) --- agg. switch --- switch (incl. NLB function) --- mail server * 2


(from jerry.clc)
EtherNation User, Official Rep
Create Date: Aug 12 2013 1:47PM

Hi jerry.clc,

Sorry for the late reply.  I'm looking into this problem.  Are you doing this for Exchange 20..?

Have you tried turning off IGMP on your 10.0.0.1 VLAN?  We are going to give this a try, but haven't yet; bigger problems unrelated to this have come up that require our team's attention.  Once I try disabling IGMP I'll let you know how it goes.
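If it helps, I believe the command we'd be trying is along these lines (using our VLAN name "District" from the show iparp output above; we haven't verified this yet):

```
disable igmp snooping vlan "District"
```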

Thanks

bw 

(from bw447)
bw447
I'm going to resuscitate this thread as we are running into issues with our NLB. The settings haven't changed at all on the switch.

We still only have the iparp command binding the unicast IP to the multicast MAC.

However, now the servers and cluster aren't always happy. Is this still the best way to set up NLB on an Extreme switch? I would also be interested in finding out whether it's possible to put in an fdb entry for the multicast MAC specifying which ports it should use.
Grosjean, Stephane, Employee
First of all, what version of NLB are you using?

While the first implementation (long ago) bound a unicast MAC to the virtual server, NLB has long been known to bind a multicast MAC to the virtual IP (03:bf:xx:xx:xx:xx). Most of the old threads on the NLB subject are almost certainly dealing with that configuration.

But since Windows Server 2003, the multicast MAC can also be in the form 01:00:5e:xx:xx:xx. This can change some behavior.

Another question: who's the router? If the gateway is physically the same device as the switch, this implies a few more tricks. The solution also depends on what type of switch and what EXOS version you are running.

With NLB there are usually two issues:
- flooding at L2 because switch never learns the virtual mac
- no ARP resolved on the router because NLB violates RFC1812 (section 3.3.2)

1]

If your design implies several tiers, the static ARP entry on the router is required.

for example (considering cluster IP is 10.2.2.2 and multicast mac is 01:00:5e:00:00:01):
X480.1 # configure iparp add 10.2.2.2 01:00:5e:00:00:01
Warning: MAC (01:00:5e:00:00:01) is a multicast address

and on the switch(es) you can also create static fdb entries pointing to the list of ports where your Microsoft servers are. If you are using VMs, scripting might be a good solution to make that more dynamic.

for example (assuming servers are on vlan v2):
X480.2 # create fdbentry 01:00:5e:00:00:01 vlan v2 ports 22,23
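To illustrate the scripting idea, here is a minimal sketch (the MAC, VLAN name, and port numbers below are placeholders, not values from any real deployment) that generates the delete/create command pair needed to move the static entry as VMs migrate between hosts:

```python
# Sketch: generate EXOS static-fdb commands for an NLB multicast MAC.
# All values passed in are placeholders; adapt them to your environment.

def fdb_commands(mac: str, vlan: str, ports: list[int]) -> list[str]:
    """Return the command pair that replaces the static fdb entry
    for `mac` on `vlan` with one pointing at `ports`."""
    port_list = ",".join(str(p) for p in ports)
    return [
        f"delete fdbentry {mac} vlan {vlan}",
        f"create fdbentry {mac} vlan {vlan} ports {port_list}",
    ]

# Example: regenerate the entry for ports 22 and 23 on vlan v2
for cmd in fdb_commands("01:00:5e:00:00:01", "v2", [22, 23]):
    print(cmd)
```

A script like this could be fed the current VM-host port list and pushed to the switch over SSH or via an EXOS script.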

But that's not all. Because an L3 IPMC lookup happens by default, the fdb entry is not used and the traffic is treated as an error. So you need to either:
- disable IGMP snooping, or
- configure EXOS for L2 IPMC lookup:

X480.3 # configure forwarding ipmc lookup-key mac-vlan

This command requires at least EXOS 15.3.1. This configuration cannot be used with MVR, IGMPv3, PIM, or PVLAN.


2]

The L2 and L3 parts are on the same device (L3 switch).
This is trickier.

The ARP entry is still mandatory.

Here we need to know what hardware you are using, in order to perform the right configuration and achieve line-rate performance. On "recent" hardware (more recent than the x250e and x450a/e, basically), we can do the following:

The fdb entry should be on a single port, then we create a redirect-port-list ACL (assuming servers are on vlan v2 and users on vlan v1):

X480.5 # create fdbentry 01:00:5e:00:00:01 vlan v2 ports 22
X480.6 # create access-list nlb "destination-address 10.2.2.2/32;" "redirect-port-list 22-23;"
X480.7 # configure access-list add nlb first vlan v1
done!

It requires EXOS 15.2.1 or above, on the right platforms, to work.

For other hardware, there's another trick.
bw447
Hello Stephane,

The router is a stack of three X670V-48X switches, which we are running as an L3 switch. Both L2 and L3 are on the same device.

We are running EXOS version 15.3.3.5.
Our servers are Windows 2012 R2.
VMware 5.5 build 1881737.

As for the version of NLB, I asked the SysOps team and they couldn't tell me.

The key problem is that the server loses communication with the cluster, which I don't blame on our Extreme switch at all. I'm just making sure everything is working properly on the networking side, trying to stay ahead of the game.

I am interested in setting up an ACL to limit the ports that the NLB multicast traffic goes to. We have 8 VMware hosts, any of which could host the NLB cluster VM; that's why we are flooding the NLB traffic down all the ports. How do you think we could limit it, given that we don't really know where the NLB cluster VM will be located at any given time?

Thanks for your help!

-Blake
Grosjean, Stephane, Employee
You should apply an ACL similar to the one in the 2] part. Even if you put all 8 ports in it, that would be better than leaving the default flooding behavior.
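For example, reusing the 2] approach with all 8 host ports (the port numbers and VLAN names below are placeholders to adapt; the IP and MAC are the sanitized cluster addresses from the start of this thread):

```
create fdbentry 03:bf:82:aa:23:01 vlan v2 ports 1
create access-list nlb "destination-address 10.0.0.1/32;" "redirect-port-list 1-8;"
configure access-list add nlb first vlan v1
```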
