We have seen several customers experience different issues involving mDNS traffic causing congestion or not being handled as expected. The testing we have done has also created more questions than answers.
It seems all 224.0.0.x traffic goes to the CPU by default, which causes issues on x460-G1 switches. We currently have one site where we are blocking 224.0.0.252/32 (LLMNR) with a deny, 224.0.0.251/32 (mDNS) with deny-cpu, and 239.255.255.250/32 (SSDP) with a deny.
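For reference, the ACL at that site looks roughly like the following EXOS policy file. This is a sketch from memory, not the exact file we deployed; entry names are ours, and you should verify deny-cpu is supported on your platform and code version:

```
entry deny_llmnr {
    if {
        destination-address 224.0.0.252/32;   # LLMNR
    } then {
        deny;
    }
}

entry deny_mdns_cpu {
    if {
        destination-address 224.0.0.251/32;   # mDNS
    } then {
        deny-cpu;   # keep it off the CPU but (in theory) still forward in hardware
    }
}

entry deny_ssdp {
    if {
        destination-address 239.255.255.250/32;   # SSDP
    } then {
        deny;
    }
}
```

We apply it with something like `configure access-list <policy-name> any ingress` so it hits all ports.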
This has greatly improved switch and network performance, but I have not yet received verification that Apple TVs and Chromebooks are operating as expected.
Some testing we did in our lab with code 18.104.22.168-patch1-3 and 22.214.171.124-patch1-3 seems to indicate that blocking mDNS with deny-cpu also causes the switch to stop forwarding mDNS in the data plane.
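For anyone who wants to reproduce the forwarding test: we generated mDNS queries from a host on one side of the switch and watched for them in a capture on the other side. A minimal stdlib sketch of the generator is below (the function name and the service string are just examples; any valid mDNS question works):

```python
import socket
import struct

MDNS_GROUP = "224.0.0.251"
MDNS_PORT = 5353

def build_mdns_query(name="_airplay._tcp.local"):
    """Build a minimal one-question mDNS query in DNS wire format."""
    # Header: ID=0, flags=0, QDCOUNT=1, AN/NS/AR counts=0
    header = struct.pack("!6H", 0, 0, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    # QTYPE=PTR (12), QCLASS=IN (1)
    question = qname + struct.pack("!2H", 12, 1)
    return header + question

def send_mdns_query(query):
    """Send the query to the mDNS multicast group on the local segment."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # mDNS convention is TTL 255 on the local link
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 255)
    s.sendto(query, (MDNS_GROUP, MDNS_PORT))
    s.close()
```

Run `send_mdns_query(build_mdns_query())` on one host while capturing UDP/5353 on a host behind the switch; with the deny-cpu entry applied, we stopped seeing the queries come through.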
Is this the expected behavior?
What is the recommended treatment of these traffic types?
We see more and more of this on school networks and are wondering whether we should create ACLs to block this traffic from the CPU as a default configuration, even on G2 switches.
Thoughts or experiences?
We have also seen a switch not relay mDNS packets between Apple TVs and an Aruba controller. Through packet captures we determined that everything was making it to the switch, IGMP snooping was configured, and the appropriate ports were in the mcast cache. We turned off IGMP snooping for the VLAN and suddenly the packets began to flow.
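The EXOS commands we used to check and then work around it were along these lines (syntax from memory, so double-check against your code version; the VLAN name is just a placeholder):

```
# Confirm snooping state and what the switch has learned
show igmp snooping vlan "Media-VLAN"
show mcast cache

# The workaround that got mDNS flowing again
disable igmp snooping vlan "Media-VLAN"
```

We are not comfortable leaving snooping off as a permanent fix, which is part of why we are asking.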
Any thoughts or similar experiences?
We found some documentation that seemed to indicate 224.0.0.x was exempt from IGMP snooping.
Is that correct?