Header Only - DO NOT REMOVE - Extreme Networks

BlackDiamond lack of memory with BGP


I am experiencing some issues with a BlackDiamond and BGP.
I have 2 iBGP and 2 eBGP peers for IPv4, and the same for IPv6 (8 BGP peers total). From the eBGP peers I receive full routing tables (> 600k IPv4 routes).
With this setup memory usage is very high, and sometimes the switch reboots due to lack of memory.
Is there any configuration to reduce memory usage, especially for the rtmgr process? Can I disable other processes? I am also open to other ideas, possibly involving filtering received routes to reduce memory consumption.

Thanks in advance.


I've applied the following configuration on the switch:

configure forwarding external-tables l3-only ipv4-and-ipv6
configure iproute reserved-entries maximum slot 2

sh memory
System Memory Information
MSM-A Total DRAM (KB): 1048576
MSM-A System (KB): 40152
MSM-A User (KB): 918992
MSM-A Free (KB): 89432

Memory Utilization Statistics

Card Slot Process Name Memory (KB)
MSM-A A aaa 4160
MSM-A A acl 3064
MSM-A A bfd 2040
MSM-A A bgp 372484
MSM-A A brm 1876
MSM-A A cfgmgr 3316
MSM-A A cli 18812
MSM-A A devmgr 2060
MSM-A A dirser 1456
MSM-A A dosprotect 1552
MSM-A A dot1ag 2580
MSM-A A eaps 2416
MSM-A A edp 2108
MSM-A A elrp 2072
MSM-A A elsm 2032
MSM-A A ems 3780
MSM-A A epm 2604
MSM-A A erps 2524
MSM-A A esrp 2316
MSM-A A etmon 5116
MSM-A A exacl 0
MSM-A A exdhcpsnoop 0
MSM-A A exdos 0
MSM-A A exfib 0
MSM-A A exfipSnoop 0
MSM-A A exosmc 0
MSM-A A exosq 0
MSM-A A exsflow 0
MSM-A A exsnoop 0
MSM-A A exsshd 1876
MSM-A A exvlan 0
MSM-A A fcoe 2160
MSM-A A fdb 3336
MSM-A A hal 54828
MSM-A A hclag 2076
MSM-A A idMgr 4432
MSM-A A ipSecurity 2260
MSM-A A ipfix 2116
MSM-A A isis 2644
MSM-A A lacp 2484
MSM-A A lldp 2248
MSM-A A mcmgr 3404
MSM-A A mpls 0
MSM-A A mrp 2300
MSM-A A msdp 2208
MSM-A A netLogin 2448
MSM-A A netTools 5796
MSM-A A nettx 0
MSM-A A nodemgr 1624
MSM-A A ntp 2040
MSM-A A ospf 2908
MSM-A A ospf-3 3240
MSM-A A ospfv3 2876
MSM-A A ospfv3-3 2952
MSM-A A pim 3164
MSM-A A poe 2156
MSM-A A polMgr 1928
MSM-A A pwmib 1592
MSM-A A rip 2524
MSM-A A ripng 2228
MSM-A A rtmgr 265872
MSM-A A snmpMaster 4260
MSM-A A snmpSubagent 4912
MSM-A A stp 2596
MSM-A A techSupport 2164
MSM-A A telnetd 2256
MSM-A A tftpd 1444
MSM-A A thttpd 2624
MSM-A A trill 0
MSM-A A twamp 1680
MSM-A A upm 2216
MSM-A A vlan 3700
MSM-A A vmt 2712
MSM-A A vrrp 2248
MSM-A A vsm 2316
MSM-A A xmlc 2336
MSM-A A xmld 4700

2 replies

One way would be:
configure bgp neighbor all maximum-prefix 400000 threshold 90

(only accept 400K routes from the neighbor, and make a warning log entry when we reach 90% of that limit)
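If you want the session actually torn down at the limit rather than just logged, the command also accepts a teardown option; the neighbor address below is only a placeholder, and you should check the command reference for your EXOS release:

```
configure bgp neighbor 192.0.2.1 maximum-prefix 400000 threshold 90 teardown
```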

Another possibility is to only accept routes up to a certain prefix length - that way you don't get a million /24 entries clogging up your routing table. For instance, a policy "BGP-in-filter":

entry DenySmall4 {
    if {
        nlri any/16;
    } then {
        deny;
    }
}
entry PermitRest {
    if {
    } then {
        permit;
    }
}
It can be applied via
configure bgp neighbor all route-policy in BGP-in-filter

and should drop anything from a /16 to a /24 (and you'd still receive your default route).
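To make the intent of that filter concrete, here is a small Python sketch (not EXOS code; the prefixes are invented examples) of the same accept/deny logic, assuming the policy drops anything with a mask length of /16 or longer while shorter prefixes, including the default route, survive:

```python
# Simulate a prefix-length filter on a received BGP table.
# Assumption: drop any route whose mask length is /16 or longer.
import ipaddress

received = [
    "0.0.0.0/0",        # default route - kept
    "10.0.0.0/8",       # shorter than /16 - kept
    "172.16.0.0/12",    # shorter than /16 - kept
    "192.0.2.0/24",     # /16 or longer - dropped
    "198.51.100.0/25",  # /16 or longer - dropped
]

def accept(prefix: str) -> bool:
    """Return True if the route survives the filter."""
    return ipaddress.ip_network(prefix).prefixlen < 16

kept = [p for p in received if accept(p)]
print(kept)  # ['0.0.0.0/0', '10.0.0.0/8', '172.16.0.0/12']
```

Dropping the long prefixes is what saves memory: the bulk of a full table is /24s, and traffic to those destinations would then follow the shorter covering routes or the default route instead.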

You can use both the policy and the maximum-prefix "cutoff", by the way.

After that, I'd still suggest keeping a close eye on CPU utilization (the bgp process) when a neighbor dies and comes back.


P.S.: Oh, I'm assuming you're already using route-compression 😉
Hi Frank,

Thanks for the reply.
Yes, I am using route-compression; your assumption is correct!

I will do some filtering as suggested and see what happens. I was thinking of something like this to solve my problem.