I am faced with the challenge of validating a fairly large switching environment to support uncompressed audio and video streaming for an audio-visual distribution system.
To summarise: I have four communications rooms, each of which must house between three and six 48-port 10 Gbps switches. Each room must be connected to a central "core" switch stack so that audio and video streams can be routed between any and all switching locations. All attached endpoint devices will be connected via 10 Gbps SFP+ modules. I have provided simplified diagrams to explain the two switch deployment options under consideration (Design 1 and Design 2).
I am looking to build redundancy into the core switch stack, so that if a core switch fails, connectivity between endpoints remains, albeit potentially at lower bandwidth.
In addition, I am looking to minimise the number of QSFP modules and fibre-optic runs required. The simplified diagrams show:
- Two communications rooms, each allocated 5x Summit X670-48t switches, each fitted with 4x QSFP VIM modules; and
- The central communications room, allocated 2x Summit X770 32-port QSFP switches, connected with direct-attach cables to form a SummitStack 320 (a single logical core switch).
In Design 1, each edge switch has 2x 10 Gbps SFP+ connections going to each of the X770 core switches.
In Design 2, each communications-room group of switches is stacked using the rear 4-port 40 Gbps VIM modules, and a pair of QSFP links from the top and bottom switches connects the stack to the X770 core switches.
The final layout will feature 2 additional switch groups (not shown).
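To make the module and fibre savings concrete, here is a rough count of core-facing links under the two uplink approaches described above. This is my own back-of-envelope sketch (I have not seen the diagrams): I assume "per-switch homing" means every edge switch runs 2x 10 Gbps SFP+ links to each core, and "stacked uplinks" means one QSFP link each from the top and bottom switch of a room's stack.

```python
# Assumed deployment figures, taken from the description above
ROOMS = 4               # four communications rooms in the final layout
SWITCHES_PER_ROOM = 5   # 5x Summit X670 per room
CORES = 2               # 2x Summit X770 in the core stack

# Per-switch homing: 2x 10 Gbps links from each edge switch to each core
per_switch_links = ROOMS * SWITCHES_PER_ROOM * 2 * CORES
print(per_switch_links)              # 80 fibre runs
print(per_switch_links * 2)          # 160 SFP+ modules (one per link end)

# Stacked uplinks: 2 QSFP links per room (top switch + bottom switch)
stacked_links = ROOMS * 2
print(stacked_links)                 # 8 fibre runs
print(stacked_links * 2)             # 16 QSFP modules (one per link end)
```

Under these assumptions the stacked-uplink approach needs an order of magnitude fewer runs, which matches my impression of Design 2 below.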
I am looking for feedback and advice on the presented design options. In my view, Design 2 is the more efficient, using significantly fewer QSFP modules and fibre runs. Should Design 2 perhaps carry an additional pair of QSFP links, providing 4x 40 Gbps between each comms-room switch stack and the core switch stack?
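To put numbers on the extra QSFP pair, here is a rough oversubscription calculation for one comms-room stack. It is my own back-of-envelope figure, assuming the worst case in which every access port is saturated simultaneously; real AV loads will sit somewhere below this, but uncompressed video is closer to it than typical data traffic.

```python
# Edge capacity of one comms-room stack, per the description above
PORTS_PER_SWITCH = 48
SWITCHES = 5
PORT_GBPS = 10

edge_capacity = PORTS_PER_SWITCH * SWITCHES * PORT_GBPS   # 2400 Gbps

# Oversubscription ratio for 2 vs 4 QSFP (40 Gbps) uplinks to the core
for qsfp_links in (2, 4):
    uplink_gbps = qsfp_links * 40
    ratio = edge_capacity / uplink_gbps
    print(f"{qsfp_links}x QSFP uplinks: {ratio:.0f}:1 oversubscription")
# 2x QSFP uplinks: 30:1 oversubscription
# 4x QSFP uplinks: 15:1 oversubscription
```

Even with the additional pair of links, the stack remains heavily oversubscribed in the worst case, so the decision really depends on how many simultaneous streams are expected to cross between rooms rather than stay local to a stack.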
Any comments would be most appreciated.