mDNS traffic flow
Posted 01-29-2018 14:54
We have seen several customers experience different issues involving mDNS traffic causing congestion or not being handled as expected. The testing we have done has also created more questions than answers.
It seems all 224.0.0.x traffic goes to the CPU by default, which causes issues on x460-G1 switches. We currently have one site where we are blocking 224.0.0.252/32 (LLMNR) with a deny, 224.0.0.251/32 (mDNS) with a deny-cpu, and 239.255.255.250/32 (SSDP) with a deny.
This has greatly improved switch and network performance, but I have not yet received verification that Apple TVs and Chromebooks are operating as expected.
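For reference, a minimal sketch of an EXOS ACL policy file along the lines described above. The entry names are hypothetical; the addresses are the well-known LLMNR, mDNS, and SSDP multicast groups:

```
entry deny_llmnr {
    if match all {
        destination-address 224.0.0.252/32 ;
    } then {
        deny ;
    }
}
entry deny_mdns {
    if match all {
        destination-address 224.0.0.251/32 ;
    } then {
        deny-cpu ;
    }
}
entry deny_ssdp {
    if match all {
        destination-address 239.255.255.250/32 ;
    } then {
        deny ;
    }
}
```

Something like "configure access-list <policy_name> vlan <vlan_name> ingress" would then apply it; check your EXOS version's documentation for the exact binding syntax.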
Some testing we did in our lab with code 220.127.116.11-patch1-3 and 18.104.22.168-patch1-3 seems to indicate that blocking mDNS with deny-cpu causes the switch to also stop forwarding mDNS from the data plane.
Is this the expected behavior?
What is the recommended treatment of these traffic types?
We see more and more of this on school networks and are wondering if we should create ACLs to block this traffic from the CPU as a default configuration even on G2 switches.
Thoughts or experiences?
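One way to check whether mDNS is still being forwarded in the data plane is to run a listener on a host in the VLAN while another device sends mDNS. This is a generic Python sketch, not tied to any switch vendor; it simply joins the well-known mDNS group and waits for a packet:

```python
import socket
import struct

MDNS_GROUP = "224.0.0.251"  # well-known mDNS multicast group
MDNS_PORT = 5353

def open_mdns_listener(iface_ip: str = "0.0.0.0") -> socket.socket:
    """Return a UDP socket bound to port 5353 and joined to the mDNS group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MDNS_PORT))
    # Join 224.0.0.251 on the given interface (0.0.0.0 lets the kernel pick).
    mreq = struct.pack("4s4s", socket.inet_aton(MDNS_GROUP),
                       socket.inet_aton(iface_ip))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

if __name__ == "__main__":
    listener = open_mdns_listener()
    listener.settimeout(30)
    try:
        data, src = listener.recvfrom(4096)
        print(f"mDNS packet from {src[0]}: {len(data)} bytes")
    except socket.timeout:
        print("No mDNS traffic seen in 30s")
    finally:
        listener.close()
```

If the listener sees queries from other devices with the deny-cpu ACL in place, data-plane forwarding is intact; silence would support the behavior you observed in the lab.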
We have also seen a switch not relay mDNS packets between Apple TVs and an Aruba controller. Through packet captures we determined everything was making it to the switch, IGMP snooping was configured, and the appropriate ports were in the mcast cache. We turned off IGMP snooping for the VLAN and suddenly the packets began to flow.
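For anyone wanting to reproduce that workaround, disabling snooping per VLAN is a one-liner on EXOS (the VLAN name here is hypothetical):

```
disable igmp snooping vlan "AppleTV"
```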
Any thoughts or similar experiences?
We found some documentation that seemed to indicate 224.0.0.x was exempt from IGMP snooping.
Is that correct?
RE: mDNS traffic flow
Posted 01-29-2018 19:53
If MDNS and LLMNR are needed you can try to utilize the following command:
"configure forwarding ipmc local-network-range fast-path"
You will need to keep the following in mind when you implement this:
Fast-path forwarding dictates that packets traversing the switch do not require processing by the CPU. Fast-path packets are forwarded entirely by ASICs and are sent at wire-speed rate. This consumes additional system ACLs per port or per VLAN, depending on the "configure igmp snooping filters [per-port | per-vlan]" selection.
This means that if you are utilizing any other protocols that fall within the local network range (e.g., OSPF and VRRP), they will not be processed by the CPU.
It will probably require a reboot, and since you have a lab setup it shouldn't be that hard to see if it resolves the issue you are seeing.
Everything else you are talking about seems correct. The 224.0.0.x traffic can affect a network based on a couple of factors: the design, the size of the subnet, or even the size of the stack, because the switch takes in this traffic and replicates it out all ports in that VLAN. I have seen cases where this traffic is sent out at an alarming rate from the PC side. I am not completely sure if this has been resolved, but I did find one earlier post on it while searching.
The G2 switches may be better equipped to handle the traffic. They do have a stronger CPU and may not show the same symptoms.
The 224.0.0.x range is considered local multicast, and most of it is exempt from IGMP snooping. Some protocol traffic, like OSPF and VRRP, is not exempt from IGMP snooping.
Hope I answered all your questions.