Need help with QoS - Thanks!

So, I'm having a hard time wrapping my head around this and what I need to do.

Basically, we have a bunch of remote sites that come in over what our vendor calls a "Metro-E" or Point-to-Point Circuit.

Where the ISP hands off to us, all the traffic comes in over 1 cable on 1 port.

Each of these remote sites has varying bandwidth back to the main site.

e.g.: Site A = 5, Site B = 3, Site C = 10.

Each of these remote sites has phone/Citrix traffic that needs to take priority when heading back this way, and vice versa.

I think I've got the remote site part figured out. I turned on diffserv examination and assigned that traffic a higher QP. For Citrix, I used an ACL that matches on the port number. Then I adjusted the MIN/MAX BW on the port coming back to the main site to reflect each site's bandwidth.
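
As a point of reference, the remote-site setup described here might look roughly like the following in EXOS-style CLI. This is a hedged sketch, not the poster's actual config: the DSCP code point (46), port numbers, and the Citrix TCP port (1494) are illustrative placeholders.

```
# Create a higher-priority queue and classify by DSCP at ingress
create qosprofile qp6
enable diffserv examination ports all
configure diffserv examination code-point 46 qosprofile qp6

# Citrix matched by an ACL policy file (e.g. "citrix.pol") applied at ingress:
#   entry citrix { if { protocol tcp; destination-port 1494; } then { qosprofile qp6; } }
configure access-list citrix ports 1 ingress

# Reflect this site's circuit rate on the WAN-facing port
# (minbw/maxbw are percentages of port speed)
configure qosprofile qp6 minbw 20 maxbw 100 ports 2
```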

My problem now is: how do I do the same thing from the main site? If I put more than one remote site in a QP, how will I give it an accurate MIN/MAX BW? Do I just put each remote site into its own QP? What happens when I run out of QPs?

If you need more info, please just ask.

Jeremy Homan


Posted 5 years ago


Paul Russo, Alum

Official Response
Hey Jeremy

QoS takes effect as soon as you set the policy for which traffic goes into which queue. What I mean by that is that you should see traffic hit the queue even if nothing else is on the wire. When you were told that it "kicks in," I think they were referring to the fact that you don't really see a benefit until the link is saturated. Look at it as insurance. For example, if you travel on a road with a high-occupancy (HOV) lane during the day when there is not a lot of traffic, you can still use that lane, but you probably won't see a difference, since there is enough "bandwidth" in the other lanes that everyone travels at the same speed. If you travel that same stretch at rush hour, you will clearly see the difference: the HOV lane will be moving faster than the other lanes that are backed up. The same is true for QoS.

I'm not sure why you saw a difference in the traffic unless there were bursts of traffic using the bandwidth. The switch is designed to read in 64 bytes of the packet at a time and do simultaneous lookups of the QoS forwarding table and any ACLs. This means that even before the packet has fully entered the port, everything has been decided, and since we do it in 64-byte chunks we don't suffer when one packet is bigger than another; latency stays consistent.

In regards to the port, remember that each port has 8 queues, and each queue on each port can have different settings. This means that I can create qp6 on all ports but give that queue a 10% minimum on the uplink and a 1% minimum on the other ports.
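
A minimal sketch of that idea in EXOS-style CLI (port numbers here are placeholders): the same queue exists on every port, but its guarantee is set per port.

```
create qosprofile qp6
# 10% minimum on the uplink port
configure qosprofile qp6 minbw 10 maxbw 100 ports 48
# 1% minimum on the edge ports
configure qosprofile qp6 minbw 1 maxbw 100 ports 1-47
```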

We assign traffic to a queue when it enters the ingress port. So if port 1 goes to a phone and port 48 is the uplink, I can have QoS set up to look at the traffic on port 1 (using .1p, DSCP, or an ACL, as examples) and assign it to qp6. That packet is then marked for qp6 in the forwarding ASIC, and as it reaches the egress port it is placed into qp6 there as well. In your example the remote sites may all arrive on one port, but the traffic is divided either by .1p or by IP subnet if it is routed. That doesn't change the QoS. All we are saying is: as that port sends traffic to the next switch, make sure the traffic in qp6 gets 10% of that port's bandwidth, as an example.
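
The ingress-classification methods mentioned above might be configured along these lines (a hedged EXOS-style sketch; the port, code point, and .1p priority are assumptions, not values from the thread):

```
# Trust DSCP on the phone-facing port and map code point 46 to qp6
enable diffserv examination ports 1
configure diffserv examination code-point 46 qosprofile qp6

# Alternatively (or additionally), map .1p priority 5 to qp6
configure dot1p type 5 qosprofile qp6
```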

So at the main site you can use any method to determine what type of traffic goes into which queue: DSCP, .1p, an ACL on port number, VLAN, whatever. The switch will then forward the packet based on its forwarding tables.
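
To make that concrete for the multi-site case, here is one hedged sketch (EXOS-style; the subnets, queue numbers, and port numbers are all hypothetical): classify traffic for each remote site's subnet into its own QP where it enters the main-site switch, then set that QP's MIN/MAX on the handoff port. Note that minbw/maxbw are percentages of port speed, so each site's circuit rate has to be translated into a percentage of the handoff port's rate.

```
# sites.pol -- one ACL entry per remote site (subnets are hypothetical)
#   entry siteA { if { destination-address 10.1.0.0/16; } then { qosprofile qp4; } }
#   entry siteB { if { destination-address 10.2.0.0/16; } then { qosprofile qp5; } }
#   entry siteC { if { destination-address 10.3.0.0/16; } then { qosprofile qp6; } }
create qosprofile qp4
create qosprofile qp5
create qosprofile qp6
configure access-list sites ports 1-47 ingress

# Shape on the handoff port (port 48 here); on a 100 Mbps handoff,
# maxbw 5 caps a queue at roughly 5 Mbps
configure qosprofile qp4 minbw 2 maxbw 5 ports 48
configure qosprofile qp5 minbw 1 maxbw 3 ports 48
configure qosprofile qp6 minbw 4 maxbw 10 ports 48
```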

Does that help or am I still missing the question?