TCAM distribution on x670

  • Question
  • Updated 4 years ago
Hi,
Could you please explain: is the HW Route Table stored in the same memory (TCAM?) as the HW L3 Hash Table, or is it separate storage? I.e., according to the output below, can there be 16352 IPv4 routes and 4096 IPv4 multicast entries at the same time?

# sh iproute reserved-entries statistics 
             |-----In HW Route Table----|   |--In HW L3 Hash Table--|
             # Used Routes   # IPv4 Hosts   IPv4   IPv4   IPv6  IPv4
Slot  Type   IPv4   IPv6    Local Remote   Local  Rem.   Loc.  MCast
----  ------ ------  -----  ------ ------   -----  -----  ----  -----
1   X670-48x 9746      0     362      0       0      0     0   1173

Theoretical maximum for each resource type:
X670        16352   8176    8189  16384    8189   8192  4096  *4096

Vadim


Posted 4 years ago


Sumit Tokle, Alum

The "Route Table" (LPM) is a separate table from the "L3 Hash Table". The first line of that output groups the columns into the two different HW tables (Route Table and L3 Hash Table).
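As an illustrative sketch only (this is a model of the capacity rules, not actual EXOS code), the two tables can be thought of as independently sized pools, so filling one does not consume space in the other; the limits below are taken from the "Theoretical maximum" line in the output above:

```python
# Illustrative model: the LPM route table and the L3 hash table are
# separate hardware resources on the X670, so their capacities are
# checked independently of each other.
LPM_IPV4_MAX = 16352            # "In HW Route Table" IPv4 routes
L3_HASH_IPV4_MCAST_MAX = 4096   # "In HW L3 Hash Table" IPv4 MCast entries

def fits(ipv4_routes: int, ipv4_mcast: int) -> bool:
    """Each table is checked against its own limit; one table filling
    up does not reduce the space available in the other."""
    return ipv4_routes <= LPM_IPV4_MAX and ipv4_mcast <= L3_HASH_IPV4_MCAST_MAX

# Both tables at their maxima at the same time is allowed in this model.
print(fits(16352, 4096))
```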

Vadim

Dear Sumit,

Thank you for the clarification!
The issue is that the switch periodically complains: "IPv4 multicast entry not added. Hardware L3 Table full. (Logged at most once per hour.)". At that moment I see 10700 routes and 1400 multicast entries. Could you please advise how this issue could be solved?

There are zero "no room" counters in "debug hal show forwarding distributions system", and "debug hal show ipv4Mc gaddr x.x.x.x" shows only 70 table entries occupied out of 185 available (table compression enabled). The output of "show igmp snooping cache" shows (as far as I remember) 1380 entries from snooping, 0 from MVR, and 30 from PIM.

Sumit Tokle, Alum

The ‘IPv4 MCast’ column displays the number of IP multicast layer-3 entries stored in the ‘L3 table’. Each of these entries represents a unique <sourceIP, groupIP, vlanId> key. These entries can be displayed via ‘debug hal show ipv4mc’. If there is anticipated contention with other entries in the ‘L3 table’ and the IP multicast forwarding is layer-2 only (no PIM, PVLAN, IGMPv3, or MVR), then you can consider using ‘config forwarding ipmc lookup-key mac-vlan’ to instead utilize the L2 MAC FDB table to handle IGMP snooping forwarding.
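One reason the mac-vlan key is only suitable for layer-2 forwarding: IPv4 multicast groups map 32:1 onto multicast MAC addresses (only the low 23 bits of the group address survive, per RFC 1112), so a MAC-based key cannot distinguish sources or aliased groups. A small sketch of that standard mapping (illustrative, not EXOS code):

```python
import ipaddress

def ipv4_mcast_mac(group: str) -> str:
    """Map an IPv4 multicast group to its 01:00:5e MAC address:
    the low 23 bits of the group address become the low 23 bits of
    the MAC (RFC 1112), so 32 distinct groups share each MAC."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    mac = 0x01005E000000 | low23
    return ":".join(f"{(mac >> s) & 0xFF:02x}" for s in range(40, -8, -8))

# 239.0.0.62 and 224.128.0.62 differ only in bits above the low 23,
# so they collide on the same multicast MAC.
print(ipv4_mcast_mac("239.0.0.62"))
print(ipv4_mcast_mac("239.0.0.62") == ipv4_mcast_mac("224.128.0.62"))
```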

Sumit Tokle, Alum

You can verify the default behaviour of the switch with the "show forwarding configuration" command output.

Vadim

PIM is configured on the switch, and a lot of multicast flows pass through it, so 1400 entries looks reasonable, and we can't change the lookup key. 'debug hal show ipv4mc' also shows group-table info; I meant the 70 entries in that table:
Total IPMC Cache Entries              : 6
Total IPMC Caches with No Group Index : 0
IPMC Group Table Entries In-use       : 68
IPMC Group Table Entries Max          : 185
The issue is that the switch complains the L3 Table is full, but I haven't seen even 2000 entries in the L3 Hash Table. I guess a software upgrade is required to fix the issue.

Thank you for support!

Sumit Tokle, Alum

This issue needs more troubleshooting. What software is the switch running? How frequently are you seeing the table-full messages? Did you see any packet loss, or some delay in multicast traffic being sent out from the switch? What is the CPU utilization on the switch? Is it in a stack or a standalone device? In the 'show mcast cache' command output, how many streams are you seeing?

Vadim

Just for information: once I decided to optimize the network and configured this switch as the PIM router for 10 VLANs and 200 multicast streams (200 incoming, 600-800 outgoing across the 10 VLANs in total), and got an issue with IGMP subscriptions: multicast traffic for several streams stopped 210 seconds after subscription, and I was unable to subscribe to the group again until 'clear pim cache' was issued (I tried several times). There were 600 entries in the L3 hash table at that time. Note that forwarding stopping after 210 seconds looks like the PIM join/prune holdtime. The issue appeared on almost all streams in the range 239.0.0.62 - .99 (but not all; maybe 5 streams from that range were fine), while other streams (239.0.0.1 - .62, 239.0.1.0 - ...) were not affected. I moved back to the old configuration, with this switch as PIM router for 30 streams, which is operational now; no incidents with PIM or subscriptions since, only the 'table full' messages appear. Meanwhile another switch, an x670v-48x, has been responsible for PIM routing of the 200 streams mentioned above, with no issues reported for several months; the software version is the same.
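For reference, the 210-second figure does match the PIM-SM defaults: Join/Prune messages are sent every 60 seconds, and the advertised holdtime is 3.5 times that period (RFC 4601), so forwarding state expires exactly 210 seconds after the last refresh. A quick check of the arithmetic:

```python
# PIM-SM defaults (RFC 4601): periodic Join/Prune every 60 s,
# holdtime advertised as 3.5 x that period.
T_PERIODIC = 60                  # seconds between Join/Prune messages
HOLDTIME = 3.5 * T_PERIODIC      # state expires this long after the last join

# Forwarding stopping 210 s after subscription is consistent with
# periodic joins no longer being sent or processed.
print(HOLDTIME)
```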

Typically the switch sends messages during peaks, when, say, more than 1400 entries are used, and sometimes when only 1200; we can get 1-3 messages during an evening, or 1 message during a morning or day, and then no messages for several days.
There is no packet loss: no complaints from customers, no reports from the monitoring system, and the error counters on the interfaces are clean.
It's quite hard to monitor delays in the multicast streams; we definitely have some (several ms once every 5 minutes, probably not on all streams at once), but I'm not sure whether they are a product of this switch or inherited from upstream; we never paid attention to it.
It's a standalone switch; CPU load:
CPU Utilization Statistics - Monitored every 5 seconds
-----------------------------------------------------------------------

Process      5   10   30   1    5    30   1    Max           Total
            secs secs secs min  mins mins hour            User/System
            util util util util util util util util       CPU Usage
            (%)  (%)  (%)  (%)   (%)  (%)  (%)  (%)         (secs)
-----------------------------------------------------------------------

System       21.0 20.7 20.9 20.4 20.8 20.7 20.6 99.9   244.16   735087.05
aaa           0.5  0.2  0.2  0.2  0.2  0.2  0.2  1.9  1052.77    1260.28 
acl           0.0  0.0  0.0  0.1  0.1  0.1  0.1  2.0  1159.41    2237.66 
bfd           0.1  0.0  0.0  0.3  0.1  0.1  0.1 22.4   205.81     793.62 
bgp           0.7  0.5  0.2  0.3  0.3  0.4  0.3 89.2  4930.21    3300.20 
brm           0.0  0.0  0.0  0.0  0.0  0.0  0.0  2.4     5.99       4.75 
cfgmgr        0.5  0.2  0.0  2.6  0.6  0.3  0.3 42.5  3083.58    1457.60 
cli          16.5 21.6  8.6 16.8 14.4 14.9 14.9 94.2 128241.87  10053.42 
devmgr        0.0  0.0  0.0  0.0  0.0  0.0  0.0  4.8    55.38      21.61 
dirser        0.1  0.0  0.0  0.2  0.0  0.1  0.1  1.7   150.08     504.77 
dosprotect    0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.8     6.15       4.66 
dot1ag        0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.0    72.48     116.34 
eaps          0.0  0.0  0.0  0.0  0.0  0.0  0.0  4.0    27.82      24.80 
edp           0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.6    59.60      48.70 
elrp          0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.8     5.70       4.69 
elsm          0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.2    56.72      53.99 
ems           0.0  0.0  0.0  0.0  0.1  0.0  0.0 25.0   161.66     254.23 
epm           2.4  1.3  0.8  0.3  0.2  0.1  0.1  3.9  2638.95    1051.47 
esrp          0.0  0.0  0.0  0.0  0.1  0.1  0.1  2.8   226.39     415.51 
ethoam        0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.4     6.96       6.18 
etmon         0.0  0.0  0.0  0.3  0.2  0.2  0.1  3.2   546.00     897.42 
exacl         0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
exdhcpsnoop   0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
exdos         0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
exfib         0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
exosipv6      0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
exosmc        0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
exosnvram     0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
exosq         0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
exsflow       0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
exsnoop       0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
exvlan        0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
fdb           4.8  4.2  3.4  3.4  3.2  3.2  3.3 45.9 91664.36   36557.11 
hal          11.8 14.1  5.6  5.9  5.7  5.8  5.8 87.0 23451.69   190491.85
hclag         0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.8     5.38       4.82 
idMgr         0.0  0.0  0.0  0.0  0.0  0.0  0.0  4.6     8.18       5.85 
ipSecurity    0.0  0.0  0.0  0.0  0.0  0.0  0.0  5.0    24.14      11.04 
ipfix         0.0  0.0  0.0  0.0  0.0  0.0  0.0  4.0     7.59       5.33 
isis          0.0  0.0  0.0  0.2  0.0  0.0  0.0  2.6   202.50     166.22 
lacp          0.0  0.0  0.0  0.0  0.0  0.1  0.0  2.6   196.65     158.46 
lldp          0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.6    92.09      92.48 
mcmgr         0.8  1.2  1.2  1.2  1.3  1.3  1.3 97.4 41280.01    9992.43 
mpls          0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
msdp          0.0  0.0  0.0  0.2  0.1  0.1  0.1  3.1   843.29     713.03 
msgsrv        0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.5    14.09      13.46 
netLogin      0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.0     7.25       6.38 
netTools      0.0  0.1  0.0  0.2  0.0  0.0  0.0  7.8   113.63      56.47 
nettx         0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
nodemgr       0.0  0.1  0.0  0.2  0.1  0.1  0.1  0.8  2080.79    2239.11 
ospf          0.0  0.0  0.0  0.3  0.3  0.2  0.2  1.9  1151.73    1510.82 
ospfv3        0.4  0.3  0.1  0.0  0.0  0.0  0.0  3.8   258.20     286.69 
pim           0.0  0.2  0.1  0.2  0.2  0.2  0.1  6.4  2675.13    1331.17 
poe           0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.2     5.10       4.42 
polMgr        0.0  0.0  0.0  0.0  0.0  0.0  0.0  8.2     9.62       7.64 
rip           0.6  0.3  0.1  0.1  0.1  0.1  0.1  3.2   637.94    1267.49 
ripng         0.0  0.0  0.0  0.0  0.1  0.1  0.1  1.3   503.20     629.63 
rtmgr         0.0  0.0  0.0  0.1  0.0  0.1  0.1 34.8   547.04     287.41 
snmpMaster    0.0  0.0  0.0  1.6  0.4  0.5  0.5 16.4  3864.42    1962.54 
snmpSubagent  0.0  0.0  0.0  4.0  0.8  0.8  0.8 46.6  7277.51    1308.93 
stp           0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.4    14.07       9.42 
synce         0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00 
telnetd       0.0  0.3  0.2  0.2  0.2  0.2  0.2  5.0   382.15    1843.87 
tftpd         0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.4     7.16       6.20 
thttpd        0.0  0.0  0.0  0.0  0.0  0.0  0.0  4.3    37.70      10.41 
upm           0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.6     4.99       4.08 
vlan          0.0  0.0  0.0  1.6  0.3  0.3  0.4 31.7  4647.60     776.21 
vmt           0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.0     7.69       5.70 
vrrp          0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.6     8.17       5.44 
vsm           0.0  0.0  0.0  0.0  0.0  0.0  0.0  3.5     9.49       9.07 
xmlc          0.0  0.0  0.0  0.0  0.0  0.0  0.0  2.6     6.28       4.80 
xmld          0.0  0.0  0.0  0.0  0.0  0.0  0.0 32.0    10.13       8.32
'show mcast cache' output:
<output omitted>
Multicast cache distribution:
  1140 entries from Snooping           0 entries from MVR          32 entries from PIM

Total Cache Entries: 1172
Software version is 12.6.2.10 patch1-12, Core license.

Your participation in troubleshooting is highly appreciated.

Sumit Tokle, Alum

There could be mismatched entries between the hardware and software tables, which causes these log messages to appear. However, it won't drop the traffic; it is being sent via the slow path. You can just delete those events from the DefaultFilter if you want to get rid of the log messages.

However, it would be better if you can upgrade the switches to EXOS 15.3.1 patch 1-36 and check whether or not the issue gets resolved.

Vadim

We have no complaints about traffic loss or failed IGMP subscriptions when the message appears (BTW, no messages during the last week), but there are a lot of VLANs and customers, so we can't be sure. I guess we will consider a software update.
Thank you very much for your support!