VOSS switch redundancy

Alex221
New Contributor

Hello all,

We are planning a hardware refresh and have purchased four 7520-48Y-8C-FabricEngine switches (version 9.0.5.1).

Two of these will be utilized as Core switches, while the other two will serve as Data Center (DC) switches. The DC switches will be connected to all servers and storage devices. The Core switches are configured with vIST.

I attempted to establish SMLT between interfaces 1/49 and 1/50 on the Core switches, but I cannot use SPBM on those interfaces. If I configure ISIS on 1/49 and 1/50, then SMLT becomes unavailable. I understand that stacking is not possible on VOSS switches, which means we cannot configure both Core switches as a single logical unit.

One potential solution I am considering is to create a second vIST between the DC switches.

[Attached topology diagram: Alex221_0-1762333806470.png]

Could you please advise if there is any way to maintain the current design, or should I go ahead and create a second vIST?

Thanks,
Alex

 

4 REPLIES

WillyHe
Contributor II

Hello Alex,

In this setup, I would apply the SPBM Fabric Connect configuration to all switches.

vIST means virtual IST: the two SMLT cluster members do not need a direct link between them, because the vIST is established over the fabric.

You only require a vIST between the two switches where you uplink other devices (native switches, servers, ...) via link aggregation to an SMLT configuration on the SMLT cluster(s).
This means that if there are no SMLT connections on the Core switches, you do not need to configure a vIST there.

I suppose you will connect servers, firewalls, etc. to the DC switches via SMLT or LACP-SMLT; in that case a vIST is required on the two DC switches.
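
For illustration, a minimal server-facing SMLT on the DC cluster could look like the sketch below. This is only a sketch with assumed example values (MLT 10, member port 1/1, VLAN 100, I-SID 10100); adjust them to your own design. The same MLT and VLAN configuration goes on both DC cluster members.

enable
conf t
# MLT 10 is the server-facing link aggregation, marked as SMLT on both cluster members
mlt 10 enable
mlt 10 member 1/1
interface mlt 10
smlt
exit
# the server VLAN is mapped to an I-SID so it is reachable across the fabric
vlan create 100 type port-mstprstp 0
vlan i-sid 100 10100
vlan mlt 100 10
save config

If you prefer LACP-SMLT over a static SMLT, LACP is additionally enabled on the member port and on the MLT; the SMLT and vIST parts stay the same.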

Hope this helps.

Regards,
WillyHe

Roger_Lapuh
Extreme Employee

Hi Alex, you need a vIST on the two boxes the server is dual-homed to, I assume. The rest of the links are all NNIs. We don't allow NNI configuration on SMLT ports, as SMLT ports are always UNIs.
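
As a quick sanity check, the usual VOSS show commands should confirm which links ended up as NNIs and where the SMLT and vIST live:

show isis interface     # ports running IS-IS towards the fabric (NNIs)
show mlt                # configured MLTs and their member ports
show virtual-ist        # state of the vIST between the two cluster members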

EF
Contributor III

I fully agree with Phil: use a fabric connection between the nodes.

Regarding stacking: to the best of my knowledge, no VSP model supports stacking. However, it is possible to create clusters (virtual IST / vIST) to enable dual-homing topologies towards other networks or end devices.

Phil_
New Contributor III

Hi Alex,

Why don't you use IS-IS NNI links between the core switches and the DC switches? (You don't need MLTs or SMLTs for IS-IS links; MLTs are only recommended when multiple physical connections run from one node to the same other node.) Using vIST requires a fabric anyway. You could also establish a vIST in the DC environment without the need for a crosslink between the two DC switches.

My recommendation is to configure the following:

Core 1:

enable
conf t
interface gigabitEthernet 1/49,1/50
isis
isis spbm 1
isis enable
no shutdown
exit
save config

Core 2:

enable
conf t
interface gigabitEthernet 1/49,1/50
isis
isis spbm 1
isis enable
no shutdown
exit
save config

DC 1:

enable
conf t
interface gigabitEthernet 1/49,1/50
isis
isis spbm 1
isis enable
no shutdown
exit
save config

# + vIST configuration (a sketch follows below)

DC 2:

enable
conf t
interface gigabitEthernet 1/49,1/50
isis
isis spbm 1
isis enable
no shutdown
exit
save config

# +vIST Configuration
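
For the vIST part, a minimal sketch for DC 1 could look like the block below; VLAN 4000, I-SID 14000, the 10.0.0.0/30 addresses, the virtual BMAC and the peer system ID are placeholder values and must match your own addressing and the real IS-IS system ID of the peer. Because the vIST runs over the fabric, no extra crosslink between the DC switches is needed.

enable
conf t
# vIST VLAN, carried across the fabric via its I-SID
vlan create 4000 type port-mstprstp 0
vlan i-sid 4000 14000
interface vlan 4000
ip address 10.0.0.1 255.255.255.252
exit
# SMLT cluster parameters (IS-IS usually has to be globally disabled while changing these)
router isis
spbm 1 smlt-virtual-bmac 02:00:00:00:40:00
spbm 1 smlt-peer-system-id 0200.0000.0002
exit
virtual-ist peer-ip 10.0.0.2 vlan 4000
save config

DC 2 mirrors this with the IP addresses swapped (10.0.0.2 locally, peer-ip 10.0.0.1) and smlt-peer-system-id pointing at DC 1's system ID; the smlt-virtual-bmac must be identical on both cluster members.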

Best regards,
Philipp
