Multicast Lab IOS15: PIM Modes Configurations

I have created an IOS15 Multicast Lab in GNS3 and used the different PIM modes to investigate and understand how they work in my lab environment. There are also some best practices to keep in mind when using multicast in production.

The lab is quite simple. We have a multicast server at 172.16.2.200 and a single multicast receiver at 172.16.3.201. Both are Windows servers running a multicast tester called Singleware Multicast Tester.

Multicast requirements

To have a working multicast network we need the following. I’d say the major point for a lab is a fully converged unicast routing network. In a production environment, enabling multicast routing and enabling PIM on the correct interfaces could easily be missed.

  • Fully converged unicast routing
  • Multicast routing enabled
  • Enable PIM on relevant interfaces
  • RP configuration (sparse mode)
  • Join multicast groups (testing)
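The first three requirements can be met with a minimal baseline configuration. A sketch is below; the interface names are assumptions for illustration, so substitute the interfaces facing your source, receivers, and neighbouring routers.

```
! Enable multicast routing globally (it is disabled by default in IOS)
ip multicast-routing
!
! Enable PIM on every interface in the multicast path
interface FastEthernet0/0
 ip pim dense-mode    ! or sparse-mode, depending on the design
!
interface FastEthernet0/1
 ip pim dense-mode
```

For sparse mode, the RP configuration (requirement four) is covered later in the post.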

Multicast Process

The multicast process is a little complex and I don’t want to repeat what is already very well explained in this video.
Instead, I will highlight the process that can be used for confirming and troubleshooting multicast.

  • The multicast sender is connected to R1 and has an IP of 172.16.2.200.
  • The multicast receiver is connected to R3 and has an IP of 172.16.3.201.
  • The multicast stream is 239.0.1.2.

PIM Dense Mode

PIM Dense Mode is not a recommended method for multicast traffic because of the flood-and-prune approach it takes to forwarding. PIM dense mode floods traffic to all multicast routers, meaning we could end up with a lot of traffic flooding the network, depending on the types of multicast traffic and the number of multicast groups we have.

For me in this lab dense mode is not a problem. In a production network it is not recommended as it’s quite noisy. Dense mode is easy to set up though.

No Multicast Traffic State

We have the default Cisco group of 224.0.1.40, which is used for Cisco’s Auto-RP Rendezvous Point discovery.
On R1 and R3 we have another group, 239.255.255.250, which I understand is used for Windows UPnP/SSDP.

Multicast Server Transmitting

I have started the mcast_server to begin sending traffic. The group is 239.0.1.2. No clients want to receive this traffic yet.
I am using Singleware Multicast Tester to generate the multicast traffic for this group.

We have two extra entries in the multicast routing table: a (*,G) and an (S,G) entry for the group 239.0.1.2.
Below is the R1 mroute table.

We can see the PIM dense mode flood and prune working here. PIM dense mode floods the traffic to the routers down the multicast tree, R2 and R3 in this case, every three minutes.
R2 and R3 don’t yet have any clients, and therefore no need for this multicast traffic.
Every three minutes R1 floods this multicast traffic out, and we see a prune message from R2 and R3 immediately.

The timers at the end here show the time since the last flood (1 second) and the countdown to the next flood (2 minutes 58 seconds).
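As a purely illustrative example (not captured from the lab; interface names and uptimes are assumed), a pruned dense-mode (S,G) entry in `show ip mroute` looks along these lines, with the flood/prune timer pair on the outgoing interface line:

```
R1# show ip mroute 239.0.1.2
(172.16.2.200, 239.0.1.2), 00:06:12/00:02:58, flags: T
  Incoming interface: FastEthernet0/0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/1, Prune/Dense, 00:00:01/00:02:58
```

The `00:00:01/00:02:58` pair on the last line is the uptime since the prune and the expiry countdown, after which the interface returns to forwarding and the next flood happens.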

We can also see the multicast traffic in Wireshark, along with the prune message being sent back immediately from R2/R3 to R1.

Client Receiving Multicast Traffic

Now that the multicast traffic is transmitting we can introduce the client and see how this changes things.
R3 is now forwarding the traffic out of an outgoing interface. This is the interface towards the mcast_client PC.
The mcast_client PC is using the same Singleware Multicast Tester software to receive the traffic.

The mcast_client PC will send an IGMP join message up to the router R3 when the multicast client is run.

When the mcast_client no longer wants to receive the multicast traffic, it will send an IGMP leave message.
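If no client software is to hand, a router interface can also be made to join a group for testing. This is a standard IOS command; using it for the lab group here is my own suggestion, not part of the original lab:

```
interface FastEthernet0/1
 ip igmp join-group 239.0.1.2
```

With this configured, the router itself behaves as a receiver for 239.0.1.2 and will respond to pings sent to the group address, which is a handy end-to-end check.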

Client Joining when no Multicast Transmitting

When there is no multicast traffic being sent from the mcast_server, the mcast_client can still join the group; however, the R3 multicast routing table won’t have any entry for the multicast group 239.0.1.2 that is being requested.

Router PIM Dense Configurations

The configuration is basic and similar on all of the routers. My main points came from the multicast requirements list above.

R1

R2

R3
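As a sketch of what each router’s dense-mode configuration contains, following the requirements list (interface names are assumptions; PIM goes on every interface in the multicast path):

```
ip multicast-routing
!
interface FastEthernet0/0
 ip pim dense-mode
!
interface FastEthernet0/1
 ip pim dense-mode
```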

PIM Sparse Mode

Sparse mode is different from the flooding and pruning that dense mode uses. In summary, sparse mode has receivers explicitly request the multicast traffic, which is more efficient than the flood and prune of dense mode. Sparse mode is generally recommended.

With sparse mode you need a rendezvous point (RP). This may be configured automatically or statically. Whichever method is used, the RP must be known by all of the routers participating in multicast.

The configuration is similar to that of dense mode. The major differences are that the interface commands contain “sparse” rather than “dense”, and a rendezvous point has to be configured.
The router outputs below show the Ver/Mode column as D for dense or S for sparse.

I will use a static RP for this part of the lab, and I will make it R2. Not because it is the best choice, but to see how the shared tree and source tree work.
In brief, the shared tree always carries multicast via the RP, which is not always the best path. The source tree is the client router, R3, learning that the best path to the multicast server is directly via R1 and not via R2.

The RP is interface lo10 on R2 with IP 10.1.1.1. This has also been added to EIGRP, and all routers can ping 10.1.1.1.
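A sketch of that RP setup is below. The EIGRP AS number is an assumption, as the AS used in the lab isn’t shown:

```
! R2 – the RP interface
interface Loopback10
 ip address 10.1.1.1 255.255.255.255
 ip pim sparse-mode
!
router eigrp 1                 ! AS number assumed for illustration
 network 10.1.1.1 0.0.0.0
!
! All routers – point statically at the RP
ip pim rp-address 10.1.1.1
```

Advertising the loopback into EIGRP is what makes the RP reachable (and pingable) from every router.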

No Multicast Traffic State

This looks the same as PIM dense mode did, with the same default groups. The only differences are that there is no flood/prune happening and the output references sparse mode.

Multicast Server Transmitting

Once I start transmitting traffic from mcast_server we start to see differences. There are no clients requesting the traffic just yet.
R1 sends out a PIM register message for the multicast group 239.0.1.2. This message goes only to the RP, 10.1.1.1, and as there are no clients to receive the traffic the RP sends back a PIM register-stop message to stop the multicast traffic flowing, as it is not required.

R1 and R2 now both know about the multicast traffic though. It is only R3 that doesn’t have any changes to the multicast routing table.

Client Receiving Multicast Traffic

In this process a lot happens very quickly. Basically three things happen:
1. mcast_client requests to receive the multicast traffic from 239.0.1.2
2. The traffic is sent to R3 and on to mcast_client via R2 – shared tree
3. The traffic is found to have a more efficient path directly from R1 to R3 bypassing R2 – source tree
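The switch in step 3 happens because IOS defaults to a shortest-path-tree (SPT) threshold of zero, meaning the last-hop router joins the source tree as soon as the first packet arrives via the shared tree. If you wanted R3 to stay on the shared tree through the RP instead, the standard IOS knob for this is:

```
ip pim spt-threshold infinity
```

This is worth knowing for troubleshooting, as it explains why the shared-tree path via R2 is only seen for a moment in the captures.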

I’ll go into detail with Wireshark captures below. Note that the routers use the all-PIM-routers address 224.0.0.13 to talk to one another.
First we have mcast_client requesting to join the group 239.0.1.2.


Second we have R3 joining the multicast group for the client. This is sent to the RP R2.


Third we have R2 sending the PIM join message to R1 to get the multicast traffic from the source.


Now the traffic flows from the mcast_server to the mcast_client via R2. However, this is short-lived: once R3 learns the real source IP is 172.16.2.200, it looks this up in the routing table and sees a better path. This is the point where multicast switches from the shared tree via R2 to the source tree, which goes directly to R1.

In the diagram the shared tree is red and the source tree is blue.


In the fourth and fifth steps, R3 will prune the multicast traffic up to R2, which is the shared tree, and join the better multicast path to R1, which is the source tree.


Lastly, R2 will prune the multicast traffic up to R1. R2 now has no clients, and therefore no need for any of the multicast traffic until it is asked for it again.

Looking at the multicast routing tables, we see that traffic is flowing from R1 to R3 directly. R2 has the (*,G) entry with an outgoing interface only and the (S,G) entry with an incoming interface only. So R2 is in a state of knowing there is multicast traffic for this group, but not needing to be part of it.

Router PIM Sparse Configurations

R1

R2

R3
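As with dense mode, a sketch of what each router’s sparse-mode configuration contains (interface names are assumptions; the static RP line is the same on every router):

```
ip multicast-routing
!
interface FastEthernet0/0
 ip pim sparse-mode
!
interface FastEthernet0/1
 ip pim sparse-mode
!
ip pim rp-address 10.1.1.1
```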

Troubleshooting Commands

This is not a complete list of commands to run, but it is a good start for understanding a topology and beginning troubleshooting.
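These are all standard IOS commands; the addresses shown are the lab’s source, receiver, and group:

```
show ip mroute                  ! multicast routing table: (*,G)/(S,G) entries, flags, timers
show ip pim neighbor            ! PIM adjacencies
show ip pim interface           ! PIM mode (the Ver/Mode column) and DR per interface
show ip pim rp mapping          ! which RP serves which groups (sparse mode)
show ip igmp groups             ! groups joined by directly connected hosts
show ip rpf 172.16.2.200        ! RPF check towards the source
mtrace 172.16.2.200 172.16.3.201 239.0.1.2    ! trace the multicast path hop by hop
```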
