<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic vsp 7200 monitor-by-isid in ExtremeSwitching (VSP/Fabric Engine)</title>
    <link>https://community.extremenetworks.com/t5/extremeswitching-vsp-fabric/vsp-7200-monitor-by-isid/m-p/8604#M185</link>
    <description>Running version 6.1.x at the moment on 8 x VSP 7200.&lt;BR /&gt;
&lt;BR /&gt;
In a Nutshell:&lt;BR /&gt;
I'm testing an IDS solution and configured a monitor-by-isid (with QoS = 3) to copy traffic across the SPB cloud to the IDS. We noticed a massive increase (1ms to 300ms) in ping latency across the core. The config seemed to run fine for a few days/weeks before this incident. Unplugging the IDS restores normality. &lt;BR /&gt;
&lt;BR /&gt;
I have a suspicion that I might be oversubscribing the core or egress interface, since it's (4 x 10GbE to a single 10GbE egress) x 4. The mirror interfaces are normally around 20% utilized, but who knows!?&lt;BR /&gt;
I do not see any indication of packet loss on the interface side.&lt;BR /&gt;
&lt;BR /&gt;
  &lt;I&gt;show khi performance buffer-pool&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi performance cpu&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi performance memory&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi performance process &lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi performance pthread &lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi performance slabinfo&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi cpp port-statistics &lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show qos qosq-stats cpu-port&lt;/I&gt;&lt;BR /&gt;
&lt;BR /&gt;
All of these outputs look normal, as far as I can see, since the timestamps of the recorded highs do not correspond with the recent impact.&lt;BR /&gt;
&lt;BR /&gt;
I thought it might be memory or buffers running low causing this, but I cannot see any indication of that in the stats.&lt;BR /&gt;
&lt;BR /&gt;
&lt;B&gt;My Question:&lt;/B&gt;&lt;BR /&gt;
Are there other parameters I can watch to see drops or performance impact on the mirroring/monitoring side?&lt;BR /&gt;
Any tips?&lt;BR /&gt;
&lt;BR /&gt;
Other than this, the SPB core is super stable.&lt;BR /&gt;
&lt;BR /&gt;
Disclaimer - This is my first post. Apologies if I missed or got something wrong. &lt;BR /&gt;</description>
    <pubDate>Tue, 25 Sep 2018 22:45:00 GMT</pubDate>
    <dc:creator>Andre_Jordaan</dc:creator>
    <dc:date>2018-09-25T22:45:00Z</dc:date>
    <item>
      <title>vsp 7200 monitor-by-isid</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-vsp-fabric/vsp-7200-monitor-by-isid/m-p/8604#M185</link>
      <description>Running version 6.1.x at the moment on 8 x VSP 7200.&lt;BR /&gt;
&lt;BR /&gt;
In a Nutshell:&lt;BR /&gt;
I'm testing an IDS solution and configured a monitor-by-isid (with QoS = 3) to copy traffic across the SPB cloud to the IDS. We noticed a massive increase (1ms to 300ms) in ping latency across the core. The config seemed to run fine for a few days/weeks before this incident. Unplugging the IDS restores normality. &lt;BR /&gt;
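&lt;BR /&gt;
For reference, the Fabric RSPAN config was along these lines. (The session IDs, offsets and port numbers below are placeholders, not my actual config, and the exact syntax may differ between VOSS releases - check your release's CLI reference.)&lt;BR /&gt;
&lt;BR /&gt;
&lt;I&gt;# On the mirroring BEB: mirror a port into a monitor I-SID at QoS 3&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;mirror-by-port 1 in-port 1/1 monitor-isid-offset 1 mode both qos 3&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;# On the monitoring BEB: terminate the same I-SID offset onto the IDS port&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;monitor-by-isid 1 monitor-isid-offset 1&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;monitor-by-isid 1 monitoring-ports 1/48&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;monitor-by-isid 1 enable&lt;/I&gt;&lt;BR /&gt;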
&lt;BR /&gt;
I have a suspicion that I might be oversubscribing the core or egress interface, since it's (4 x 10GbE to a single 10GbE egress) x 4. The mirror interfaces are normally around 20% utilized, but who knows!?&lt;BR /&gt;
I do not see any indication of packet loss on the interface side.&lt;BR /&gt;
&lt;BR /&gt;
  &lt;I&gt;show khi performance buffer-pool&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi performance cpu&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi performance memory&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi performance process &lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi performance pthread &lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi performance slabinfo&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show khi cpp port-statistics &lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show qos qosq-stats cpu-port&lt;/I&gt;&lt;BR /&gt;
&lt;BR /&gt;
All of these outputs look normal, as far as I can see, since the timestamps of the recorded highs do not correspond with the recent impact.&lt;BR /&gt;
&lt;BR /&gt;
I thought it might be memory or buffers running low causing this, but I cannot see any indication of that in the stats.&lt;BR /&gt;
&lt;BR /&gt;
&lt;B&gt;My Question:&lt;/B&gt;&lt;BR /&gt;
Are there other parameters I can watch to see drops or performance impact on the mirroring/monitoring side?&lt;BR /&gt;
Any tips?&lt;BR /&gt;
&lt;BR /&gt;
Other than this, the SPB core is super stable.&lt;BR /&gt;
&lt;BR /&gt;
Disclaimer - This is my first post. Apologies if I missed or got something wrong. &lt;BR /&gt;</description>
      <pubDate>Tue, 25 Sep 2018 22:45:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-vsp-fabric/vsp-7200-monitor-by-isid/m-p/8604#M185</guid>
      <dc:creator>Andre_Jordaan</dc:creator>
      <dc:date>2018-09-25T22:45:00Z</dc:date>
    </item>
    <item>
      <title>RE: vsp 7200 monitor-by-isid</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-vsp-fabric/vsp-7200-monitor-by-isid/m-p/8605#M186</link>
      <description>Hello,&lt;BR /&gt;
&lt;BR /&gt;
I do not have any experience with Fabric RSPAN, but I remember we had serious stability issues with VSP 9000 when we were doing long-term port mirroring to a hardware probe. It took us days to find out what was causing the problems. On the other hand, I have never seen such behaviour on VSP 8000.</description>
      <pubDate>Tue, 16 Oct 2018 21:17:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-vsp-fabric/vsp-7200-monitor-by-isid/m-p/8605#M186</guid>
      <dc:creator>Martin_Sebek</dc:creator>
      <dc:date>2018-10-16T21:17:00Z</dc:date>
    </item>
    <item>
      <title>RE: vsp 7200 monitor-by-isid</title>
      <link>https://community.extremenetworks.com/t5/extremeswitching-vsp-fabric/vsp-7200-monitor-by-isid/m-p/8606#M187</link>
      <description>I have been running Fabric RSPAN now across the VSP7200 core for about a month without any problems.  Key was to set a low QoS so as not to interfere with production traffic.&lt;BR /&gt;
&lt;BR /&gt;
&lt;P class="fancybox-image"&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="b3a7a942af86407cb1d2135cdc10b5e9_RackMultipart20181022-122629-vsfnt1-Diagram_-_RSPAN_inline.jpg"&gt;&lt;img src="https://community.extremenetworks.com/t5/image/serverpage/image-id/5530i5251DA2321921DBD/image-size/large?v=v2&amp;amp;px=999" role="button" title="b3a7a942af86407cb1d2135cdc10b5e9_RackMultipart20181022-122629-vsfnt1-Diagram_-_RSPAN_inline.jpg" alt="b3a7a942af86407cb1d2135cdc10b5e9_RackMultipart20181022-122629-vsfnt1-Diagram_-_RSPAN_inline.jpg" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;BR /&gt;
&lt;BR /&gt;
Also, I split the monitor ports across two end switches (the lower two in the diagram). All of these are 10GbE ports, set to mirror Rx and Tx.&lt;BR /&gt;
Understandably, when mirroring 8 x 10Gb ports to a single 10Gb port, one can expect some packet loss in the mirrored traffic. Overall, with this enabled in production, I did notice some general sluggishness in the VMs and applications.&lt;BR /&gt;
I guess that the 40Gbps links between the ToR switches, carrying only two backbone I-SIDs, were congested, though I did not search for evidence in the performance counters.&lt;BR /&gt;
Overall, very happy with what was achieved.&lt;BR /&gt;
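&lt;BR /&gt;
If anyone does want to dig for that evidence, these are the kinds of counters I would start with to spot drops on the mirroring side. (Command availability and exact names can vary by VOSS release, so treat this as a starting point rather than a definitive list.)&lt;BR /&gt;
&lt;BR /&gt;
&lt;I&gt;# Verify the mirror/monitor sessions and their state&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show mirror-by-port&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show monitor-by-isid&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;# Per-port traffic and error counters on the egress/monitor ports&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show interfaces gigabitEthernet statistics&lt;/I&gt;&lt;BR /&gt;
&lt;I&gt;show interfaces gigabitEthernet error&lt;/I&gt;&lt;BR /&gt;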
&lt;BR /&gt;</description>
      <pubDate>Tue, 23 Oct 2018 00:29:00 GMT</pubDate>
      <guid>https://community.extremenetworks.com/t5/extremeswitching-vsp-fabric/vsp-7200-monitor-by-isid/m-p/8606#M187</guid>
      <dc:creator>Andre_Jordaan</dc:creator>
      <dc:date>2018-10-23T00:29:00Z</dc:date>
    </item>
  </channel>
</rss>

