For the last few days we were facing a problem with a route reflector that was not receiving the proper routes from its clients. Think for a moment: you are managing a service provider or a large enterprise network with multiple route reflectors in the domain, and one day you come to know that one of your route reflectors is not receiving the full route updates from its clients. What will you do at that time? You might look for an expert with good knowledge of route reflectors, BGP and MPLS, but until then the network will be black holed. In this post I am giving a workaround for the problem which can be tried on an urgent basis.
Let me describe a scenario in which you may face this type of problem. Assume you have a service provider domain with two route reflectors, and every PE peers with both of them. The route reflectors are used for both IPv4 and VPNv4 routes. At any point of time both route reflectors advertise their route updates to every PE, but the PE selects one of the two as best and uses the other only if anything goes wrong with the first one.
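For reference, a minimal sketch of the PE-side peering is shown below. The AS number (65000), the RR loopback addresses (192.168.0.1 for RR1, 192.168.0.2 for RR2) and the update source Loopback0 are assumptions for illustration only, not values from the actual network.

router bgp 65000
 ! iBGP sessions to both route reflectors, sourced from the PE loopback
 neighbor 192.168.0.1 remote-as 65000
 neighbor 192.168.0.1 update-source Loopback0
 neighbor 192.168.0.2 remote-as 65000
 neighbor 192.168.0.2 update-source Loopback0
 !
 address-family ipv4
  neighbor 192.168.0.1 activate
  neighbor 192.168.0.2 activate
 exit-address-family
 !
 address-family vpnv4
  ! both RRs carry VPNv4 routes, so extended communities must be sent to them
  neighbor 192.168.0.1 activate
  neighbor 192.168.0.1 send-community extended
  neighbor 192.168.0.2 activate
  neighbor 192.168.0.2 send-community extended
 exit-address-family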
The figure shows the scenario: a service provider network with MPLS in the core, where every PE has a session with both route reflectors. The customer is VPN A, with three locations across the service provider cloud. The CE1 VPN A location has internet access, so a default route is injected into the VRF there and the service provider advertises that route into all the VPN A VRFs. On PE1, if we verify any route of a remote location, we get two entries with the loopback of PE2 as next hop; the route is advertised by both RRs and only one entry is shown as best. In this case the route via RR1 is preferred, and if RR1 goes down the route via RR2 will be preferred instead.

Now assume that PE2 advertises its routes to both RRs but RR2 is not getting the proper updates. On RR2 only 10.1.2.0/24, 10.1.3.0/24 and 0.0.0.0/0 are received, and RR2 advertises these routes to all the PEs. If RR1 goes down and at the same time CE3 wants to reach the CE2 LAN, CE3 forwards the packet to PE1, and on PE1 only the above three routes are installed. But CE3 wants to reach 10.1.1.0/24, which is not available, so the traffic falls back on the default route and may be black holed at any time.
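As a quick check on PE1, you can compare the VPN A BGP table with what is actually installed in the VRF routing table. This is only a sketch, assuming the VRF is named VPNA; the prefix 10.1.1.0/24 is the CE2 LAN from the scenario above.

show ip bgp vpnv4 vrf VPNA
show ip route vrf VPNA 10.1.1.0

When both RRs are healthy, the BGP table should show two paths for each prefix learned from PE2, with the PE2 loopback as next hop and one of them marked as best; the second command shows which path is actually installed for the CE2 LAN.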
The Cisco commands that can be used for checking the VPNv4 routes are listed below:
show ip bgp vpnv4 all summary
show ip bgp vpnv4 rd x:y neighbors <neighbor-ip> routes
show ip bgp vpnv4 rd x:y neighbors <neighbor-ip> advertised-routes
On both RRs you can check the installed routes.
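For example, to compare what PE2 is sending with what RR2 is actually receiving, something like the following can be used (the addresses in angle brackets are placeholders for the RR2 and PE2 loopbacks, which are not given in the scenario):

On PE2: show ip bgp vpnv4 all neighbors <RR2-loopback> advertised-routes
On RR2: show ip bgp vpnv4 all neighbors <PE2-loopback> routes

If a prefix advertised by PE2, for example 10.1.1.0/24, does not appear in the output on RR2, then the route reflector is not receiving the full updates from that client.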
Workaround:-
This is nothing but a Cisco bug, so in this case you need to check the IOS version. Apart from that, you can clear the full BGP neighborship with the affected client or reload the router; after that the route reflector receives the full routes.
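As a rough sketch, the session towards the affected client can be reset like this (the address is a placeholder for the client's loopback; a soft inbound refresh can be tried first, but in the scenario described here a full clear or a reload was what actually brought the routes back):

clear ip bgp <client-loopback> soft in
clear ip bgp <client-loopback>

Keep in mind that the second command is a hard reset: it tears down the session and withdraws the routes learned from that client until the session comes back up, so do it in a maintenance window if possible.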
So if you see your traffic behaving abnormally, check the updates on your route reflectors first. The reason for writing this post is that I faced the same problem myself, and it is not a test lab scenario.