Looking for a good core type switch for a colo rack

Keith9
Contributor III

I'm looking for a good core-type switch for a colo rack that will house things like servers, VPN devices, Internet handoffs, etc.

The uplink to the rest of our private WAN would be a 10 Gb private fiber ring. The vendor terminates the ring in their ADVA handoff switch, so they see and manage both ends of the ring and give us one link.

A backup path may be an IPsec tunnel through a Palo Alto firewall over the Internet.

Right now we are building out an X590 stack (24-port copper, 24-port SFP+), and I have the 40G ports each partitioned into four 10G ports with breakout cables. This is for a DR colo; I'm just waiting for my second Core license key from my reseller so the stack license mismatch goes away.
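In case it helps anyone doing the same, the 40G-to-4x10G split on EXOS is a one-liner per port (port number below is just an example for a stack; syntax is from memory, so check it against your EXOS release, and note the partition only takes effect after a reboot):

```
# Split a 40G QSFP+ port into four 10G lanes (example port on stack member 1)
configure ports 1:53 partition 4x10G
# The new partition takes effect on the next reboot
reboot
```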

In our HQ we have four X690s at our core: one 48-port copper and one 48-port SFP+ make up one core, and we have the same pair for the other core. They run VRRP, and we do MLAG between these stacks. I want to do something similar, but we bought these X690s back in 2017, so I just want to size things and get the best bang for the buck for a new colo. The plan is eventually to move our primary server infrastructure at our HQ to a colo facility (one that manages power, cooling, and security like a pro). But all our WAN and branch connections will still come into our main office for quite some time, so we are not moving the X690s. They aggregate other access stacks throughout the building anyway (5520s).
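For anyone curious, the VRRP-plus-MLAG arrangement between the two cores looks roughly like this on EXOS (all addresses, VLAN names, ports, and IDs below are placeholders, and the syntax is from memory, so verify against your release):

```
# On core 1 -- ISC link to the MLAG peer (core 2 mirrors this config)
create vlan isc
configure vlan isc add ports 1:48 tagged
configure vlan isc ipaddress 10.255.255.1/30
create mlag peer "core2"
configure mlag peer "core2" ipaddress 10.255.255.2

# A dual-homed downlink becomes an MLAG port (same id on both peers)
enable mlag port 1:1 peer "core2" id 101

# VRRP gateway on a server VLAN (give core 2 a lower priority)
create vrrp vlan servers vrid 10
configure vrrp vlan servers vrid 10 priority 200
configure vrrp vlan servers vrid 10 add 10.10.10.1
enable vrrp vlan servers vrid 10
```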

So, to build new in a colo rack or two that's just servers, what's a good-quality core switch that can do OSPF, BGP, MLAG, etc.? Perhaps I could use two 48-port SFP+ switches and populate them with copper SFPs, 1 Gb SFPs, and 10 Gb SFP+ where needed? I know the X590 seems to work great with all of those SFP types. Our X690s didn't link up with copper SFPs when I tried back in 2017, so I'm not sure if that's a hardware restriction or a firmware limitation (the X590 is new, so it's up to date).
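If the new colo cores end up routing, the basic OSPF piece on EXOS is small (router ID and VLAN names below are placeholders; from memory, OSPF on these boxes needs the Core license, which ties in with the license-key wait above):

```
# Minimal OSPF on EXOS -- requires the Core license
enable ipforwarding vlan servers
configure ospf routerid 10.0.0.10
# Advertise the server VLAN and the WAN-facing VLAN into area 0
configure ospf add vlan servers area 0.0.0.0
configure ospf add vlan wan-ring area 0.0.0.0
enable ospf
```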

For storage/iSCSI we would probably just move our VM infrastructure over a weekend along with our Arista DCS-7280SR-48C6-F pair, and continue to keep all storage traffic on separate iron from data traffic.

We are familiar with EXOS so I'd prefer that, but I'm open to suggestions.

The colo would probably have its own networks. It's going to be quite a big undertaking to re-IP everything, but I don't feel that doing a Layer 2 span across a 10 Gb WAN is best practice. Our vendor says it's basically dark fiber and we can do anything on it (jumbo frames, Layer 2/3, VLAN trunking, etc.); they don't touch it. It's point-to-point between our HQ, DR colo, and furthest branch office, and then back to HQ, but they would insert a primary colo onto this ring for about $1,700 to $1,800 a month.

Thanks for the recommendations.

7 Replies

Keith9
Contributor III

We have the X690s at our HQ location; they seem to be workhorses, and I would likely buy them again. But as we have them now, one all-SFP+ switch is stacked with one all-UTP-copper switch.

The colocation facility we eventually want to move to will be mostly servers and WAN aggregation, so 90% of our usage will be SFP+ fiber connections. However, sometimes there's a vendor VPN device, or server DRACs/iLOs, SAN management, PDUs, and so on: the boring stuff that's 1 Gb copper. So we just want to put in two 48-port SFP+ switches, run VRRP and MLAG, and manage them separately for redundancy. But we'll need a few 1 Gb copper SFP ports here and there.

I have two X590s prepped for a DR site; they are a single managed stack, one copper, one SFP+. I put a few 1 Gb copper SFPs in the SFP+ switch just so I could have some redundant handoffs (a link to the top and the bottom switch), and in testing they seem to be OK.
I have to wonder if the 1 Gb SFPs generate less heat than the 10 Gb SFP+ modules. I haven't felt much heat personally with the FS.com modules.

jlmangas
New Contributor III

A 1 Gb copper SFP generates less heat than a 10 Gb copper SFP+.

A 1 Gb copper SFP has a power consumption of 1 W, while a 10 Gb copper SFP+ has a power consumption of 2.5 W.

I believe you don't have to leave empty slots between modules when using 1 Gb copper SFPs.
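Scaled up to a fully populated switch, that per-module difference adds up. A quick sketch using the figures above (the 48-port count and per-module wattages are just the numbers from this thread, not vendor specs):

```python
# Rough optic power budget for a fully populated 48-port switch,
# using the per-module figures quoted above: 1 W per 1 Gb copper SFP
# and 2.5 W per 10 Gb copper SFP+. Illustration only.
PORTS = 48
SFP_1G_W = 1.0    # 1 Gb copper SFP draw, watts
SFP_10G_W = 2.5   # 10 Gb copper SFP+ draw, watts

budget_1g = PORTS * SFP_1G_W    # total draw, all ports 1 Gb copper
budget_10g = PORTS * SFP_10G_W  # total draw, all ports 10 Gb copper

print(f"1G copper: {budget_1g} W, 10G copper: {budget_10g} W")
```

So an all-10 Gb-copper switch would burn roughly 2.5x the optic power (and heat) of an all-1 Gb-copper one, which matches the "less heat" observation.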

Thanks, this is good to know. All our 10 Gb is fiber SFP+ and all our 1 Gb is copper SFP. I think the all-SFP/SFP+ version of the X690 with the Core license would be a fine choice for our colo rack, where all our servers and connectivity converge.

The iSCSI traffic between storage and VMs is still on low-latency Arista switches.
