Need help with QoS - Thanks!


So, I'm having a hard time wrapping my head around this and what I need to do.

Basically, we have a bunch of remote sites that come in over what our vendor calls a "Metro-E" or Point-to-Point Circuit.

Where the ISP hands off to us, all the traffic comes in over 1 cable on 1 port.

Each of these remote sites has varying bandwidth back to the main site.

e.g., Site A = 5, Site B = 3, Site C = 10.

Each of these remote sites has phone/Citrix traffic that needs to take priority when heading back this way, and vice versa.

I think I have the remote-site side figured out. I turned on diffserv examination and assigned voice a higher QoS profile (QP). For Citrix, I used an ACL that matches on the port. Then I adjusted the min/max bandwidth on the port heading back to the main site to "reflect" each site's bandwidth.
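For anyone following along, here is a rough sketch of that remote-site setup, assuming an EXOS switch (the "configure qosprofile" syntax later in this thread suggests it is); the port numbers, DSCP value, and bandwidth figures are only illustrative assumptions:

```
# Map DSCP EF (code point 46, assumed for voice) to a higher queue
create qosprofile qp6
configure diffserv examination code-point 46 qosprofile qp6
enable diffserv examination ports 1-24

# Constrain the voice queue on the uplink toward the main site
# (minbw/maxbw are percentages of port speed; values assumed)
configure qosprofile qp6 minbw 5 maxbw 100 ports 48
```

The Citrix ACL would be a separate policy file matching the Citrix TCP port and setting a qosprofile, applied with "configure access-list" on the ingress ports.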

My problem now is: from the main site, how do I do the same thing? If I put more than one remote site in a QP, how will I give it an accurate min/max bandwidth? Do I just put each remote site into its own QP? And what happens when I run out of QPs?

If you need more info, please just ask.

Thanks,

3 replies

Hey Jeremy

Great question. One thing I do want to verify is that you can see the traffic in the correct queue, using the "show port qosm" command. This shows egress traffic, so make sure you are looking at the right port: the uplink at the remote site shows traffic going to the core, while the core-side port shows traffic going out to the remotes.
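For reference, the full form of that check on EXOS would be along these lines (port 48 is just an assumed uplink port):

```
show ports 48 qosmonitor
```

Watch the per-queue counters increment while a test call or Citrix session is running to confirm the classification is actually landing traffic in the queue you expect.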

Also, DSCP has the lowest precedence in QoS. This matters because .1p has higher precedence and is enabled by default, so if those VLANs are tagged, .1p will trigger before DSCP. Just something to keep in mind while troubleshooting. There are ways around this, but I will save them for if you need them.

To answer your question: you could do it either way. If you are worried that you will not have enough queues for each site, then I would recommend combining them and setting the queue's min to the combined total for those sites. For example, say VoIP needs 10K per site and you have 5 sites: make that queue's min 50K. In reality it can be somewhat lower than the combined total, as you will probably never burst to every site at once. Service provider networks do this all the time, since they run hundreds of VLANs over the same links; they just size for normal traffic patterns.
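A minimal sketch of that combining approach on EXOS, treating the min value as a percentage of the core uplink's speed (port number and percentages are assumptions for illustration, not sized for any real deployment):

```
# One shared voice queue for all remote sites on the core uplink
create qosprofile qp6
configure qosprofile qp6 minbw 50 maxbw 100 ports 48
```

In practice you would tune the min downward once you see how many sites actually burst voice traffic at the same time.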

I hope that helps and doesn't confuse you. Please let me know if you need more clarification.

Thanks
P
Question then,

I was told QoS does not kick in until the "link" gets saturated.

For example, it was not until I put in these commands:

configure qosprofile QP1 minbw 0 maxbw 3 ports 48
configure qosprofile QP6 minbw 0 maxbw 3 ports 48

Before I did this, call quality coming from my remote site to my main site would tear and break up all the time. After I changed the maxbw from 100 to 3, that's when the voice quality cleared up so much.

So my question is,

For the remote sites, it's easy to tell the switch how much bandwidth a QP is allowed; the link back to the main site is only "X".

How do I differentiate that at the main site, since it's all going out one port? Or don't I?

Maybe I'm looking at QoS wrong?

Thanks again,
Hey Jeremy

QoS will be in use as soon as you set the policy for which traffic goes into which queue. What I mean is that you should see traffic hit the queue even if nothing else is on the wire. When you were told it "kicks in," I think they were referring to the fact that you don't really see a benefit until the link is saturated. Look at it as insurance. For example, if you travel on a road with a high-occupancy (HOV) lane during the day, when there is not a lot of traffic, you can still use that lane, but you probably won't see a difference, since there is enough "bandwidth" in the other lanes and everyone travels at the same speed. If you travel that same stretch at rush hour, you will clearly see the difference: the HOV lane keeps moving while the other lanes are backed up. The same is true for QoS.

I am not sure why you saw a difference in the traffic unless there were bursts of traffic using up the bandwidth. The switch is designed to read in 64 bytes of the packet at a time and do simultaneous lookups of the QoS forwarding table and any ACLs. This means that even before the packet has fully entered the port, everything has been decided, and since we do it in 64-byte chunks, one packet being bigger than another doesn't hurt us; latency stays consistent.

Regarding the port: remember that each port has 8 queues, and each queue on each port can have different settings. This means I can create QP6 on all ports but give that queue a 10% min on the uplink and a 1% min on the other ports.
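That per-port tuning might look like this on EXOS (port numbers and percentages are just illustrative assumptions):

```
# Same queue, different guarantees per port
create qosprofile qp6
configure qosprofile qp6 minbw 10 maxbw 100 ports 48
configure qosprofile qp6 minbw 1 maxbw 100 ports 1-47
```

Port 48 stands in for the uplink here, with ports 1-47 as access ports.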

We assign traffic to a queue when it enters the ingress port. So if port 1 goes to a phone and port 48 is the uplink, I can have QoS look at the traffic on port 1 (using .1p, DSCP, or an ACL, for example) and assign it to QP6. That packet is then marked for QP6 in the forwarding ASIC, and as it reaches the egress port it is placed into QP6 there as well. In your example everything may share one port, but the traffic is still divided, either by .1p or by IP subnet if it is routed. That doesn't change the QoS. All we are saying is: as that port sends traffic to the next switch, make sure the traffic in QP6 gets, say, 10% of that port's bandwidth.

So at the main site you can use any method to decide what type of traffic goes into which queue: DSCP, .1p, an ACL, the physical port, the VLAN, whatever. The switch will forward the packet based on its forwarding tables.
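As one concrete (hypothetical) way to do that at the main site, an EXOS policy file could match traffic destined for a remote site's voice subnet and steer it into a queue; the entry name and subnet below are made up for illustration:

```
entry siteA_voice {
    if {
        destination-address 10.1.0.0/24;
    } then {
        qosprofile qp6;
    }
}
```

Saved as, say, remote-voice.pol, this would be applied to the ingress ports with "configure access-list remote-voice ports 1-47 ingress", so traffic headed out the shared uplink is already queued correctly when it gets there.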

Does that help or am I still missing the question?

Thanks
P
