
Avaya VSP 7008XLS 8 Port MDA


Scott_Roy
New Contributor

Hello, 

This question goes out to all Heritage Avaya aficionados. 

I have recently enabled 2 MDA cards on an existing 7024XL switch cluster and then set up LACP connections on ports 27-32 for a new storage project. 

I am not seeing the throughput I would expect to see and wondered whether the card itself is oversubscribed in any way. 

The onboard ports on the switch cluster are already occupied and I had to rely on the expansion card to accommodate the new connections. 

I am trying to find out if there might be a difference in throughput between the physical onboard ports 1-24 and the MDA ports 25-30.

Some ports on the card are showing ‘Dropped on no Resources’, which could be part of the issue if a high rate of re-transmissions is occurring. 
 

Any help is appreciated. 


EXTR_Paul
Extreme Employee

It has been brought to my attention by a colleague that the MDA’s backplane interconnect is three (3) 40GE lanes, which is 120 Gbps versus 80 Gbps for the eight 10GE ports on the module. So there is no bottleneck or oversubscription when using that module.

 

I have read your port layout description a few times.  I think you need to post this question on the NetApp forum. 

 

But it’s still unclear to me what kind of performance you are expecting to see. 

Remember that LACP, MLT, and SMLT do not aggregate throughput for a single flow. A one-to-one connection is not distributed over all the links of the SMLT/MLT group. The hashing algorithm only looks at the source and destination, based on MAC or IP, and then chooses one (1) of the MLT/SMLT links to send that connection down.

SMLT will only load-balance one-to-many and many-to-many connections.

So if your NetApp has three SMLT/MLT groups with two ports each, that is six links in total, but the point-to-point traffic between any two nodes will only traverse one link.
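
To make that concrete, here is a minimal Python sketch of the idea (not the exact Avaya hash, and the host/port names are made up): every packet of a given source/destination pair maps to the same member link, so one connection never exceeds a single 10GE port, while many different pairs spread across the group.

    def pick_link(src, dst, num_links):
        # Hash only the source/destination pair (MAC or IP), so the whole
        # conversation always lands on the same link index.
        return hash((src, dst)) % num_links

    links = 2  # e.g. one SMLT group with two 10GE members

    # One-to-one: the same link every time, capped at one port's speed.
    print(pick_link("filer-e0c", "host-1", links))
    print(pick_link("filer-e0c", "host-1", links))  # same link as above

    # One-to-many: different pairs can hash to different links.
    for host in ["host-1", "host-2", "host-3", "host-4"]:
        print(host, "->", pick_link("filer-e0c", host, links))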

 

 

 

 

Scott_Roy
New Contributor

Paul, 

Thanks for your response. 

The upstream device is a NetApp AFF-220A set up in Active/Active mode, with ifgroups split across the two clustered VSP 7024s using LACP. 

In NetApp speak, on Node A, ifgrp A0A contains ports E0C and E0E, which are connected to VSP 1 - Port 4 and VSP 2 - Port 4. LACP is set up on Port 4 with an SMLT ID of 4 assigned. 

ifgrp A0B contains ports E0D and E0F; I have these split between VSP 1 - Port 6 and VSP 2 - Port 6. LACP is set up on Port 6 with an SMLT ID of 6 assigned. 

On Node B, ifgrp A0A contains ports E0C and E0E, which are connected to VSP 1 - Port 20 and VSP 2 - Port 20. LACP is set up on Port 20 with an SMLT ID of 20 assigned. 

ifgrp A0B contains ports E0D and E0F; I have these split between VSP 1 - Port 22 and VSP 2 - Port 22. LACP is set up on Port 22 with an SMLT ID of 22 assigned. 
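
To summarize that mapping in one place (a quick sketch in Python, just restating the ports and SMLT IDs above):

    # The ifgrp-to-SMLT mapping described above, restated for easier reading.
    smlt_map = {
        ("Node A", "A0A"): {"netapp_ports": ["E0C", "E0E"], "switch_ports": ["VSP1:4", "VSP2:4"], "smlt_id": 4},
        ("Node A", "A0B"): {"netapp_ports": ["E0D", "E0F"], "switch_ports": ["VSP1:6", "VSP2:6"], "smlt_id": 6},
        ("Node B", "A0A"): {"netapp_ports": ["E0C", "E0E"], "switch_ports": ["VSP1:20", "VSP2:20"], "smlt_id": 20},
        ("Node B", "A0B"): {"netapp_ports": ["E0D", "E0F"], "switch_ports": ["VSP1:22", "VSP2:22"], "smlt_id": 22},
    }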

 

Is there any way to get the uplinks from the NetApp to work in an SMLT fashion and load-balance both links to aggregate the throughput? 

 

I am still not clear on where the issue resides at this time…. 

 

EXTR_Paul
Extreme Employee

I was looking through some old materials and couldn’t find any info on the MDA lanes or subscription ratios.

But the entire switch was rated at 1.28 Tbps full duplex, wire-speed, with a forwarding rate of 960 Mpps. So the switch will switch... very fast.
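
As a rough sanity check on how those two figures tie together (assuming the 1.28 Tbps counts both directions and minimum-size 64-byte frames with 20 bytes of preamble and inter-frame gap):

    bits_per_min_frame = (64 + 20) * 8   # 672 bits on the wire per minimum-size frame
    one_way_bps = 1.28e12 / 2            # ~640 Gbps per direction
    print(one_way_bps / bits_per_min_frame / 1e6)  # ~952 Mpps, close to the quoted 960 Mpps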

 

With that said, LACP (or a LAG) with two or more ports does not multiply your throughput. An LACP LAG will balance your traffic if you have lots of connections between the switches, and it will offer fail-over redundancy should you lose a port. But if you group four 10GE ports together, it is not going to give you 40G of capacity for a single flow. If you have one unicast path, it will all go down one of the 10GE links.
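
Building on the hashing sketch earlier in the thread, the capacity math looks roughly like this (illustrative numbers only):

    link_speed_gbps = 10
    num_links = 4

    single_flow_cap = link_speed_gbps             # one unicast path rides one member link
    aggregate_cap = link_speed_gbps * num_links   # only approachable with many distinct flows
    print(single_flow_cap, "Gbps per flow vs", aggregate_cap, "Gbps aggregate")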
