I've just been troubleshooting a rather odd network latency problem.
The scenario: we have two X480s doing BGP with some Internet transit providers and peers. It's a fairly standard ISP setup - the transit providers give us a default route and the peers provide some more-specifics, since the X480 doesn't have room for a full global IP routing table.
Earlier we were seeing latency jumps of over 100ms on all packets going through one of these switches (which happened to carry all of the BGP sessions for the peer routes). The increased latency went away when these peers were shut down and the peer routes removed from the routing table. No processes were maxing out the CPU at the top of top, though, so it didn't look like a classic slow-path issue.
The X480 should be able to cope with 256K IPv4 routes but only 8K IPv6 routes.
Just before I shut the peers down, the BGP feeds were providing a total of around 81,000 IPv4 routes and 26,000 IPv6 routes.
With the extra peer routes removed, the total size of the routing table on the switch dropped to around 450 routes and everything was (and still is) happy again.
The switch log was complaining about:
01/26/2016 17:35:23.05 [i] IPv6 route not added to hardware. Hardware LPM Table full.
which makes some sense given that 26K > 8K.
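For reference, this is roughly what I was looking at to confirm the table occupancy. Command names are from memory of the EXOS CLI, so check them against your EXOS version's documentation:

```
show iproute summary
show ipv6route summary
show iproute reserved-entries statistics
```

The last one is the interesting view, since (as I understand it) it reports per-slot hardware route-table usage rather than just the software routing table.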
So, to the key part of the question. Assuming IPv6 traffic was very low (it was), should the overflow of the IPv6 hardware table have affected IPv4 forwarding? It very much seemed to in this instance: we were testing with IPv4 traffic to devices on the network, and all of it suffered increased latency.
And as a followup, is there a way to re-carve this on an X480 to give more IPv6 hardware entries?
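In case it helps frame an answer: from my reading of the EXOS concepts guide, the candidate knobs look to be something like the below, but I haven't tested either (hence the question), and the exact syntax and supported modes may differ on the X480:

```
configure iproute reserved-entries <num_v4_entries> ports all
configure forwarding external-tables l3-only ipv6
```

My understanding is that the first shifts the split of the shared LPM table between IPv4 and IPv6 entries (IPv6 routes occupying wider slots), and the second - which I believe is X480-specific, given its external TCAM - dedicates the external tables to L3 lookups. Corrections welcome if I've misread the docs.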