mDNS traffic flow

  • Question
  • Updated 6 months ago
  • Answered
We have seen several customers experience different issues involving mDNS traffic causing congestion or not being handled as expected. The testing we have done has also created more questions than answers.

It seems all 224.0.0.x traffic goes to the CPU by default, which causes issues on x460-G1 switches. We currently have one site where we are blocking 224.0.0.252/32 (LLMNR) with a deny, 224.0.0.251/32 (mDNS) with deny-cpu, and 239.255.255.250/32 (SSDP) with a deny.
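For reference, those three rules could be expressed in an EXOS policy file along these lines (a sketch only; the entry names, the policy name, and the choice to apply it with a wildcard match are mine, not from the running config):

```
# blockmcast.pol -- hypothetical policy file illustrating the three rules above
entry deny_llmnr {
    if match all {
        destination-address 224.0.0.252/32 ;
    } then {
        deny ;
    }
}
entry deny_mdns_cpu {
    if match all {
        destination-address 224.0.0.251/32 ;
    } then {
        deny-cpu ;    # drop only the copy punted to the CPU
    }
}
entry deny_ssdp {
    if match all {
        destination-address 239.255.255.250/32 ;
    } then {
        deny ;
    }
}
```

Applied with something like "configure access-list blockmcast any ingress" (verify the attach point for your deployment).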

This has greatly improved switch and network performance, but I have not yet received verification that Apple TVs and Chromebooks are operating as expected.

Some testing we did in our lab with code 16.2.4.5-patch1-3 and 16.2.3.5-patch1-3 seems to indicate that blocking mDNS with deny-cpu also causes the switch to stop forwarding mDNS in the data plane.
Is this the expected behavior?
What is the recommended treatment of these traffic types?

We see more and more of this on school networks and are wondering if we should create ACLs to block this traffic from the CPU as a default configuration, even on G2 switches.

Thoughts or experiences?

We have also seen a switch fail to relay mDNS packets between Apple TVs and an Aruba controller. Through packet captures we determined everything was making it to the switch, IGMP snooping was configured, and the appropriate ports were in the mcast cache. We turned off IGMP snooping for the VLAN and suddenly the packets began to flow.
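The test we ran amounted to roughly the following EXOS commands (the VLAN name is a placeholder; substitute your own):

```
# Inspect snooping state and learned groups for the suspect VLAN
show igmp snooping vlan "Wireless"
show igmp group

# Disable snooping on that VLAN as a test -- mDNS then flowed
disable igmp snooping vlan "Wireless"

# Re-enable once the test is complete
enable igmp snooping vlan "Wireless"
```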

Any thoughts or similar experiences?
We found some documentation that seemed to indicate 224.0.0.x was exempt from IGMP snooping.
Is that correct?

David Coglianese, Ambassador


Posted 6 months ago


Patrick Voss, Alum

Hello David,

If mDNS and LLMNR are needed, you can try the following command:

"configure forwarding ipmc local-network-range fast-path"

You will need to keep the following in mind when you implement this:

***
Fast-path forwarding dictates that packets traversing the switch do not require processing by the CPU. Fast-path packets are forwarded entirely by ASICs and are sent at wire-speed rate. This consumes additional system ACLs per-port or per-VLAN, depending on the "configure igmp snooping filters [per-port | per-vlan]" selection.
***

This means that if you are utilizing any other protocols that fall within the local network range (OSPF and VRRP, for example), they will not be processed by the CPU.

It will probably require a reboot, and since you have a lab setup it shouldn't be hard to see whether it resolves the issue you are seeing.
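Assuming a maintenance window (the reboot requirement is my understanding, so confirm in the lab first), the test sequence would look roughly like this:

```
# Forward 224.0.0.x traffic in hardware instead of punting it to the CPU
configure forwarding ipmc local-network-range fast-path
save configuration
reboot
```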

Everything else you are talking about seems correct. The 224.0.0.x traffic can affect a network based on a couple of factors: the design, the size of the subnet, or even the size of the stack, because the switch takes in this traffic and replicates it out all ports in that VLAN. I have seen cases where this traffic was sent from the PC side at an alarming rate. I am not completely sure if this has been resolved, but here is one post that I found while searching for it:

https://social.technet.microsoft.com/Forums/en-US/b334e797-ef80-4525-b74a-b4830420a14e/windows-10-sp...

The G2 switches may be better equipped to handle the traffic; they have a stronger CPU and may not show the same symptoms.

The 224.0.0.x range is considered local multicast, and most of it is exempt from IGMP snooping. Some protocol traffic, such as OSPF and VRRP, is not exempt from IGMP snooping.

Hope I answered all your questions.