<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Network Design Question in Network Architecture &amp; Design</title>
    <link>https://community.extremenetworks.com/t5/network-architecture-design/network-design-question/m-p/14122#M1526</link>
    <description>Hi,&lt;BR /&gt;
&lt;BR /&gt;
I have a network design question. We have four X690s: two in one DC and two in the other.&lt;BR /&gt;
&lt;BR /&gt;
The requirement is to LAG within each DC and across the DCs. A fabric solution would be perfect for this, but unfortunately it isn't an option.&lt;BR /&gt;
&lt;BR /&gt;
One option would be to stack the X690s within each DC and create an MLAG between the DCs, but I don't want to use stacking in the core. The main reasons: servers will be split within each DC, so I want to be able to upgrade a switch without resetting the whole stack; I would like to take advantage of both control planes; and MLAG spreads the load across all the switch CPUs evenly (fabric routing, split VRRP).&lt;BR /&gt;
&lt;BR /&gt;
Anyway, this is what I've come up with. I'd like to validate it with the community and hear any better or alternative suggestions:&lt;BR /&gt;
&lt;BR /&gt;
&lt;P class="fancybox-image"&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="a6f3168215754d4c89108d11fc1d61c6_RackMultipart20180518-77395-j4nr9p-Core_Design_inline.png"&gt;&lt;img src="https://community.extremenetworks.com/t5/image/serverpage/image-id/5376iBB123D1DB5E0AC6C/image-size/large?v=v2&amp;amp;px=999" role="button" title="a6f3168215754d4c89108d11fc1d61c6_RackMultipart20180518-77395-j4nr9p-Core_Design_inline.png" alt="a6f3168215754d4c89108d11fc1d61c6_RackMultipart20180518-77395-j4nr9p-Core_Design_inline.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;BR /&gt;
&lt;BR /&gt;
Here there is an ISC between each pair of switches, along with OSPF point-to-point links between each switch. VRRP is split evenly between the switches, and fabric routing is in use.&lt;BR /&gt;
&lt;BR /&gt;
On ISC 1 I add all the odd VLANs, and all the switches connecting on that side use those VLANs; vice versa for the other side.&lt;BR /&gt;
&lt;BR /&gt;
The servers in each DC room reside locally only, and their VLANs are added only to the ISC between that DC's switches. For DC VLANs that are needed across DCs I could use VXLAN.&lt;BR /&gt;
&lt;BR /&gt;
I'm not sure this is an elegant solution, but on the surface it seems to provide an answer.&lt;BR /&gt;
&lt;BR /&gt;
There might be a far better way of doing it, which is why I want to put it out there and see what you think.&lt;BR /&gt;
&lt;BR /&gt;
Many thanks in advance.&lt;BR /&gt;</description>
    <pubDate>Fri, 18 May 2018 18:55:00 GMT</pubDate>
    <dc:creator>Anonymous</dc:creator>
    <dc:date>2018-05-18T18:55:00Z</dc:date>
    <item>
      <title>Network Design Question</title>
      <link>https://community.extremenetworks.com/t5/network-architecture-design/network-design-question/m-p/14122#M1526</link>
      <description>Hi,&lt;BR /&gt;
&lt;BR /&gt;
I have a network design question. We have four X690s: two in one DC and two in the other.&lt;BR /&gt;
&lt;BR /&gt;
The requirement is to LAG within each DC and across the DCs. A fabric solution would be perfect for this, but unfortunately it isn't an option.&lt;BR /&gt;
&lt;BR /&gt;
One option would be to stack the X690s within each DC and create an MLAG between the DCs, but I don't want to use stacking in the core. The main reasons: servers will be split within each DC, so I want to be able to upgrade a switch without resetting the whole stack; I would like to take advantage of both control planes; and MLAG spreads the load across all the switch CPUs evenly (fabric routing, split VRRP).&lt;BR /&gt;
&lt;BR /&gt;
Anyway, this is what I've come up with. I'd like to validate it with the community and hear any better or alternative suggestions:&lt;BR /&gt;
&lt;BR /&gt;
&lt;P class="fancybox-image"&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="a6f3168215754d4c89108d11fc1d61c6_RackMultipart20180518-77395-j4nr9p-Core_Design_inline.png"&gt;&lt;img src="https://community.extremenetworks.com/t5/image/serverpage/image-id/5376iBB123D1DB5E0AC6C/image-size/large?v=v2&amp;amp;px=999" role="button" title="a6f3168215754d4c89108d11fc1d61c6_RackMultipart20180518-77395-j4nr9p-Core_Design_inline.png" alt="a6f3168215754d4c89108d11fc1d61c6_RackMultipart20180518-77395-j4nr9p-Core_Design_inline.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;BR /&gt;
&lt;BR /&gt;
Here there is an ISC between each pair of switches, along with OSPF point-to-point links between each switch. VRRP is split evenly between the switches, and fabric routing is in use.&lt;BR /&gt;
&lt;BR /&gt;
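As a rough sketch, the ISC/MLAG and VRRP fabric-routing side of this on one switch might look like the EXOS fragment below (all VLAN names, IPs, ports and VRIDs are made up for illustration, and the syntax is worth verifying against the EXOS configuration guide):&lt;BR /&gt;
&lt;BR /&gt;
&lt;PRE&gt;# ISC VLAN towards the MLAG peer (DC1 switch 1; peer is DC1 switch 2)
create vlan isc
configure vlan isc tag 4094
configure vlan isc add ports 49 tagged
configure vlan isc ipaddress 10.0.0.1/30
create mlag peer "dc1-sw2"
configure mlag peer "dc1-sw2" ipaddress 10.0.0.2
# A server-facing MLAG port
enable mlag port 1 peer "dc1-sw2" id 101
# VRRP with fabric routing so both MLAG peers route locally
create vrrp vlan v10 vrid 10
configure vrrp vlan v10 vrid 10 add 10.10.10.1
configure vrrp vlan v10 vrid 10 priority 200
configure vrrp vlan v10 vrid 10 fabric-routing on
enable vrrp vlan v10 vrid 10&lt;/PRE&gt;&lt;BR /&gt;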
On ISC 1 I add all the odd VLANs, and all the switches connecting on that side use those VLANs; vice versa for the other side.&lt;BR /&gt;
&lt;BR /&gt;
The servers in each DC room reside locally only, and their VLANs are added only to the ISC between that DC's switches. For DC VLANs that are needed across DCs I could use VXLAN.&lt;BR /&gt;
&lt;BR /&gt;
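For the stretched VLANs, a static VXLAN sketch on an X690 could look roughly like this (the LTEP/remote IPs and the VNI are placeholders; check the VXLAN chapter of the EXOS guide for the exact commands):&lt;BR /&gt;
&lt;BR /&gt;
&lt;PRE&gt;# Local VXLAN tunnel endpoint (a loopback/OSPF-reachable IP)
configure virtual-network local-endpoint ipaddress 192.0.2.1 vr VR-Default
# Map the stretched VLAN to a VNI and add the remote DC's VTEP
create virtual-network "stretch10"
configure virtual-network "stretch10" vxlan vni 10010
configure virtual-network "stretch10" add vlan v10
configure virtual-network "stretch10" add remote-endpoint vxlan ipaddress 192.0.2.2&lt;/PRE&gt;&lt;BR /&gt;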
I'm not sure this is an elegant solution, but on the surface it seems to provide an answer.&lt;BR /&gt;
&lt;BR /&gt;
There might be a far better way of doing it, which is why I want to put it out there and see what you think.&lt;BR /&gt;
&lt;BR /&gt;
Many thanks in advance.&lt;BR /&gt;</description>
      <pubDate>Fri, 18 May 2018 18:55:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/network-architecture-design/network-design-question/m-p/14122#M1526</guid>
      <dc:creator>Anonymous</dc:creator>
      <dc:date>2018-05-18T18:55:00Z</dc:date>
    </item>
  </channel>
</rss>

