09-30-2020 01:20 PM
Hi,
I'm in the process of building a small network across three DC locations. Each DC uses 2x VSP4900s as an RSMLT cluster. We intend to use DVR across each of the sites, based largely on the details provided in the Automated Campus Extreme Evaluated Design.
My questions concern these warnings, which appear when enabling the feature:
Warning: Vlan 3500 to 3998 will be reserved for internal use.
Warning: Vrf-scaling boot flag is enabled on DVR Node, make sure it is enabled
on all DVR Nodes in domain.
Warning: Please save the configuration and reboot the switch
for this configuration to take effect.
Do you know if that's normal? It's a potential problem if it pinches VLAN IDs that are already in use, e.g. during a migration, so it's certainly good to know before building the network.
Do you know why the system needs them, and why it reserves so many?
Many thanks,
Martin
09-30-2020 01:27 PM
Martin,
Per the documentation: "On switches that support the vrf-scaling and spbm-config-mode boot configuration flags, if you enable these flags, the system also reserves VLAN IDs 3500 to 3998."
So yes, it is normal.
The reason why is that the system needs to reserve resources, and memory and CPU are limited.
Cannibalising other resources is a way to achieve a higher number of VRFs (which in turn require more resources).
If those VLANs already exist elsewhere, you can map them into an I-SID and change the VLAN ID in your core.
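To illustrate the idea, here is a hedged sketch in VOSS-style CLI. The VLAN IDs and I-SID value (12345) are hypothetical; the point is that the I-SID, not the VLAN ID, identifies the service across the fabric, so the local VLAN ID can differ per switch:

```
! Hypothetical example: the service rides I-SID 12345 everywhere.
! On an edge BEB where VLAN 3501 is still usable, keep the existing ID:
vlan create 3501 type port-mstprstp 0
vlan i-sid 3501 12345

! On a core node where 3501 falls in the reserved 3500-3998 range,
! use a different local VLAN ID mapped to the same I-SID:
vlan create 401 type port-mstprstp 0
vlan i-sid 401 12345
```

Traffic tagged 3501 at the edge and 401 in the core is still one L2 service, because both VLANs map to the same I-SID.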
Mig
09-30-2020 01:39 PM
Hi Mig,
Thank you for getting back so quickly.
Just to help me understand the last comment: say you have a virtual server on the network that already has VLANs 3501 and 3502 tagged to it.
I'm replacing the switches and using DVR, and now those VLANs become part of the reserved pot.
Because the VLAN IDs themselves are configured within the virtual environment, I need to preserve those VLAN tag IDs.
So based on your comment (and apologies for my shortcomings in knowledge), how does your concept get around that issue?
Also, I assume you can configure all the nodes in each DC with the command 'dvr controller 100', then just do the following on each node for each VLAN interface you want to participate, and it will work?
interface Vlan 401
dvr gw-ipv4 172.31.40.254
dvr enable
exit
Many thanks,
Martin
09-30-2020 01:37 PM
Mig is correct, and I just wanted to add that you can use DVR without leaf nodes, since end devices can be connected directly to the controllers.
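For completeness, a hedged sketch of the controller-side sequence Martin described. Domain ID 100 and the gateway address are from his example; the per-node IP address is hypothetical, and the SPBM/IS-IS fabric underlay plus the VLAN-to-I-SID mapping are assumed to already be in place:

```
! Assumes the SPBM/IS-IS underlay and VLAN/I-SID mapping already exist.
! Run on each VSP4900 controller node:
dvr controller 100

! Per DVR-enabled VLAN, repeated on every controller in the domain:
interface Vlan 401
   ip address 172.31.40.1 255.255.255.0   ! node-specific IP (hypothetical)
   dvr gw-ipv4 172.31.40.254              ! shared anycast gateway, same on all nodes
   dvr enable
exit
```

Each controller keeps its own interface address, while all nodes advertise the same anycast gateway (172.31.40.254) to the hosts.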