Network Design Considerations for Summit X670-48t (edge) and Summit X770 (core) Switches

Benjamin_Woenig
New Contributor II
Hello,

I am faced with the challenge of validating a fairly large switching environment to support uncompressed audio and video streaming for an audio-visual distribution system.

To summarise, I have 4 communications rooms, each of which is required to house between 3 and 6 48-port 10Gbps switches. Each of these rooms must be connected to a central "core" switch stack to enable audio and video streams to be routed to any and all other switching locations. All attached endpoint devices shall be connected via 10Gbps SFP+ modules. I have provided simplified diagrams to attempt to explain the switch deployment options under consideration (Design 1 and Design 2).

I am looking to implement redundancy within the core switch stack, so that if a core switch fails, connectivity between endpoints remains, albeit potentially at lower bandwidth.

As well as the above, I am looking to minimise the quantity of QSFP modules and optical fibre runs required where possible. The following simplified diagrams show:
  • Two communications rooms, each allocated 5x Summit X670-48t switches, each fitted with 4x QSFP VIM modules, and
  • The central communications room, allocated 2x Summit X770 32-port QSFP switches, using direct attach cables to achieve a SummitStack 320 (a single logical core switch); a minimal stacking bring-up sketch follows below.
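As a rough illustration only, the stack bring-up on EXOS is driven by the stacking easy-setup; the exact stack-port selection for SummitStack 320 should be confirmed against the EXOS documentation for the X770. A minimal sketch:

    # On the intended master X770, launch the stacking easy-setup (prompts omitted)
    enable stacking
    # After the stack reboots, verify that both nodes are present and active
    show stacking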
Design 1:
Each edge node switch has 2x 10Gbps SFP+ connections going to each of the X770 core switches.

[Image: Network Example - without stacking]

Design 2:
Each communications room "group" of switches is stacked using the rear 4-port 40Gbps VIM modules. A pair of QSFP links is attached to the top and bottom switches and then connected to the X770 core switches.

[Image: Network Example - with stacking]

The final layout will feature 2 additional switch groups (not shown).

I am looking for some feedback and advice on the presented design options. In my view, Design 2 is the more efficient design, as it uses significantly fewer QSFP modules and fibre runs. Perhaps Design 2 should feature an additional pair of QSFP links (to provide 4x 40Gbps between the comms room switch stack and the core switch stack)?

Any comments would be most appreciated.

Thanks,
Benjamin


Mrxlazuardin
New Contributor III
Hi Benjamin,

I think it is better to spread the 40GbE links across all edge switches, one or two per switch, while still having them bonded (which works because the core switches are stacked and the edge switches are too). This gives you the same capacity and QSFP module consumption as Design 2 (if the total is still four per comms room), but you will not depend solely on the first and last edge switches if one or both of them go down.
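For example, on EXOS a single LACP LAG can include member ports from different stack slots, which is what makes this spread possible; a minimal sketch (slot:port numbers are purely illustrative):

    # On the edge stack: bundle one 40G uplink from the first and the last
    # stack member into a single LACP LAG (slot:port values are examples only)
    enable sharing 1:49 grouping 1:49,5:49 algorithm address-based L3 lacp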

Best regards,

Benjamin_Woenig
New Contributor II
Hi all,

Thank you everyone for the responses, this has been most helpful.

All streaming is undertaken via multicast. Each edge switch will have a number of encoders and decoders attached, enabling audio/video sources to be transmitted to destination decoders. The majority of AV streams will stay within a given VLAN, within the same physical switch. The solution is required to be able to transmit an AV stream from one VLAN to another so that another room may view video footage. This will be on an ad hoc basis and will likely not exceed 2-3 video streams at any one time; however, we need to cover the worst-case scenario, which would be 20-30 simultaneous VLAN-to-VLAN streams. I understand that only L3 redundancy is required. I don't believe AVB is required for this solution to function, so we are reliant on PIM and IGMP being configured correctly.
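For reference, a minimal EXOS sketch of that PIM/IGMP configuration, assuming a placeholder VLAN name and addressing (rendezvous point configuration omitted):

    # Constrain multicast flooding within the VLAN
    enable igmp snooping vlan "AV-Room1"
    # Routed multicast between AV VLANs: interface address, forwarding, PIM sparse mode
    configure vlan "AV-Room1" ipaddress 10.10.1.1/24
    enable ipmcforwarding vlan "AV-Room1"
    configure pim add vlan "AV-Room1" sparse
    enable pim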

I have updated the (simplified) diagram for Design 2, which now includes 4x 40Gbps links from each comms room stack to the central core.

[Image: Network Example - with stacking - updated]

I think we will go with Design 2, as it will use far less fibre cable and far fewer QSFP modules.

Let me know if there is anything else I need to consider.

Regards,
Benjamin

Paul_Russo
Extreme Employee
Hello Benjamin

I think both designs will give you redundancy and bandwidth. I do have some thoughts that may help in your planning.

You mention audio and video streaming. If that is straight multicast, then having the proper PIM and IGMP configuration in place will work fine. If, however, you plan on using AVB, then this design will not work, as AVB is not supported on stacks.

I like Option 1 with the change to use MLAG. In your current designs, if you need to upgrade code you have to take down the whole stack, which can disrupt the network. With MLAG you can reboot either core without that outage. You can also do two-tier MLAG, where a server connects to two separate top-of-rack (TOR) switches so that if either one goes down the server stays up.
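As a rough sketch of what that MLAG core pair could look like on EXOS (the peer name, ISC VLAN, addressing, and port numbers are all illustrative):

    # On core 1: inter-switch connection (ISC) VLAN towards core 2
    create vlan "isc"
    configure vlan "isc" add ports 31 tagged
    configure vlan "isc" ipaddress 10.0.0.1/30
    # Define core 2 as the MLAG peer, then attach an edge-facing port as MLAG id 1
    create mlag peer "core2"
    configure mlag peer "core2" ipaddress 10.0.0.2
    enable mlag port 1 peer "core2" id 1
    # Core 2 mirrors this with its own ISC address and the same MLAG id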

If MLAG is not possible, I still like Option 1, as it provides a more point-to-point fabric design.

I hope that helps

P

Edward_Tsui
Extreme Employee
Hi Benjamin,

I'm not sure whether you need to provide L2 or L3 redundancy. If L2, both designs need a loop-prevention protocol such as STP, or MLAG, to achieve your goal. You may want to consider MLAG in your design, as it is a newer technology and can provide very fast convergence. For Design 2, you could add one more link from each edge switch (to provide 4x 40Gbps between the comms room switch stack and the core switch stack), then form a LAG for each pair of links and run LACP over the MLAG.
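A minimal sketch of that LACP-over-MLAG pairing (ports and IDs are illustrative): the edge stack bundles its two uplinks into one LACP LAG, and each core maps its local link to the same MLAG id:

    # Edge stack: one LACP LAG across the uplinks to core 1 and core 2
    enable sharing 1:49 grouping 1:49,5:49 lacp
    # On core 1 (peer "core2"):
    enable mlag port 25 peer "core2" id 10
    # On core 2 (peer "core1"), using the same id:
    enable mlag port 25 peer "core1" id 10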