Category Archives: JNCIE-SP exam topics

TTL Handling

In some cases you want to hide your MPLS backbone from your VPN customers and prevent traceroutes (…and annoying questions…). There are several options to achieve this.

First of all, you could simply tell your customers not to do it. Sure, and they will do it anyway 😉

You could also configure an RE firewall filter and filter all traceroute traffic. Really? No, we don’t want additional host-bound traffic that we then drop with a firewall rule; that’s just a waste of resources.

The best option is to prevent TTL propagation during the MPLS push/pop operations on the ingress and egress PE. Your customer can decrease the TTL value as much as he likes, but the value won’t get copied into the MPLS header. Instead, the TTL value in the MPLS header is fixed at 255 on the ingress PE.

There are two options in JunOS:

no-decrement-ttl – This is only valid for RSVP LSPs and is signaled per LSP in the OBJC_LABEL_REQUEST object. This is not clearly stated in the Juniper command description, but it is documented here. However, this is a proprietary mechanism, so from my point of view it is not the best way to do this.

no-propagate-ttl – This is the usual option for changing the TTL behavior. The TTL value of the IP packet won’t get copied into the TTL field of the MPLS header on the ingress PE, and the MPLS TTL won’t get copied back into the IP header on the egress PE.

You can configure no-propagate-ttl on a global level:

protocols {
    mpls {
        no-propagate-ttl;
    }
}

Or per VRF:

routing-instances {
    your-vrf-name {
        no-vrf-propagate-ttl;               # or "vrf-propagate-ttl"
    }
}

The VRF-level statement is more specific, so you can, for example, disable TTL propagation globally and re-enable it for a single VRF if needed.
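As a small sketch of this combination (the VRF name CUSTOMER-A is just a placeholder):

protocols {
    mpls {
        no-propagate-ttl;                   # hide the core for all customers by default
    }
}
routing-instances {
    CUSTOMER-A {                            # placeholder VRF name
        vrf-propagate-ttl;                  # re-enable TTL propagation for this customer only
    }
}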

You can verify the TTL operation for every VPN prefix in your VRF routing table:

admin@router> show route table <your-vrf-name> <vpn-prefix> extensive
<snip>
Label TTL action: no-prop-ttl
</snip>

Depending on the configuration, you will see either no-prop-ttl or prop-ttl.

Configuring BFD for ISIS and OSPF

In this post I will show you how to configure BFD liveness detection for your IGP. Again I use the topology from my post IGP loop prevention:

Redistribution between ISIS/OSPF

BFD is a very lightweight protocol (small header, simple state machine) to detect forwarding problems on a single link (single-hop) or between nodes (multi-hop). It uses UDP to encapsulate BFD control packets (single-hop uses port 3784, multi-hop uses 4784) and BFD echo packets (port 3785). It is specified in RFC 5880 and RFC 5881.

BFD operates after session establishment. That means once your adjacency has come up and the routers have started exchanging hello packets, BFD additionally starts sending its own periodic packets.

There are two modes of BFD:

  • Async mode – each side sends periodic control packets; if the configured number of packets is not received, the session is taken down
  • Demand mode – control packets are only sent when needed; if the expected packets are not received, the session is taken down

Additionally, BFD implements an echo function: the remote side simply loops the received echo packets back. It was designed for slow systems, where only one side (the router) implements BFD and the slow system (a host or CE) only loops the packets without having to process them. As far as I know, JunOS currently has no support for the BFD echo function, so we don’t have to think about it.

Usually you will implement BFD async mode with subsecond intervals. You configure BFD at the interface level (sometimes at the session level, e.g. for BGP or targeted LDP; a session-level sketch follows after the example below), together with the packet interval in milliseconds and a multiplier.

Example configuration of R2 with interval 100ms and multiplier 3:

admin@router> 
admin@router> show configuration logical-systems R2 protocols | display set
 set logical-systems R2 protocols isis export export-OSPF-to-ISIS
 set logical-systems R2 protocols isis interface fe-0/2/3.50 bfd-liveness-detection minimum-interval 100
 set logical-systems R2 protocols isis interface fe-0/2/3.50 bfd-liveness-detection multiplier 3
 set logical-systems R2 protocols isis interface fe-0/2/3.50 level 1 disable
 set logical-systems R2 protocols isis interface lo0.2 passive
 set logical-systems R2 protocols ospf export export-ISIS-to-OSPF
 set logical-systems R2 protocols ospf area 0.0.0.0 interface lo0.2 passive
 set logical-systems R2 protocols ospf area 0.0.0.0 interface fe-0/2/3.53 bfd-liveness-detection minimum-interval 100
 set logical-systems R2 protocols ospf area 0.0.0.0 interface fe-0/2/3.53 bfd-liveness-detection multiplier 3
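Session-level BFD, e.g. for BGP, looks very similar. This is only a sketch; the group name IBGP and the neighbor address are hypothetical:

 set protocols bgp group IBGP neighbor 10.0.1.13 bfd-liveness-detection minimum-interval 100
 set protocols bgp group IBGP neighbor 10.0.1.13 bfd-liveness-detection multiplier 3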

You can check the BFD state with the command „show bfd session <detail|extensive>“:

admin@router> show bfd session logical-system R2 extensive
                                                    Detect   Transmit
 Address                  State     Interface      Time     Interval  Multiplier
 10.0.1.13                Up        fe-0/2/3.53    0.300     0.100        3
 Client OSPF realm ospf-v2 Area 0.0.0.0, TX interval 0.100, RX interval 0.100
 Session up time 00:31:39
 Local diagnostic None, remote diagnostic None
 Remote state Up, version 1
 Logical system 5, routing table index 29
 Min async interval 0.100, min slow interval 1.000
 Adaptive async TX interval 0.100, RX interval 0.100
 Local min TX interval 0.100, minimum RX interval 0.100, multiplier 3
 Remote min TX interval 0.100, min RX interval 0.100, multiplier 3
 Local discriminator 5, remote discriminator 6
 Echo mode disabled/inactive

                                                    Detect   Transmit
 Address                  State     Interface      Time     Interval  Multiplier
 10.0.1.1                 Up        fe-0/2/3.50    0.300     0.100        3
 Client ISIS L2, TX interval 0.100, RX interval 0.100
 Session up time 00:31:39
 Local diagnostic None, remote diagnostic NbrSignal
 Remote state Up, version 1
 Logical system 5, routing table index 29
 Min async interval 0.100, min slow interval 1.000
 Adaptive async TX interval 0.100, RX interval 0.100
 Local min TX interval 0.100, minimum RX interval 0.100, multiplier 3
 Remote min TX interval 0.100, min RX interval 0.100, multiplier 3
 Local discriminator 8, remote discriminator 2
 Echo mode disabled/inactive

  2 sessions, 2 clients
 Cumulative transmit rate 20.0 pps, cumulative receive rate 20.0 pps
admin@router>

If more BFD packets are lost than the configured multiplier allows, the BFD session goes down and immediately takes your IGP adjacency down with it. This leads to faster failure detection and usually to faster convergence.

Most single-hop BFD sessions run distributed on your FPC/MPC with the help of PPM (periodic packet management). That means you can use very low intervals (maybe 3x15 ms). But there are limits on the number of such short-interval sessions, so call JTAC and ask about them if you plan to run a large number of sessions. BFD multi-hop sessions currently only run on the RE, so you should never use intervals faster than 300 ms there.
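As a side note: there is a global knob to disable this PPM delegation and run the periodic packet handling on the RE instead, which can be handy for troubleshooting. A sketch only; don’t combine it with aggressive intervals:

routing-options {
    ppm {
        no-delegate-processing;             # handle periodic packets on the RE instead of the FPC/MPC
    }
}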

Hint 1: Don’t forget to allow BFD control packets in your firewall filter!
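A minimal sketch of such a term, using the UDP ports mentioned above (the filter and term names are made up; your RE filter will look different):

firewall {
    family inet {
        filter protect-RE {                         # hypothetical RE filter name
            term allow-bfd {
                from {
                    protocol udp;
                    destination-port [ 3784 4784 ];     # single-hop and multi-hop BFD control
                }
                then accept;
            }
        }
    }
}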

Hint 2: If you clear your BFD session or deactivate the configuration, BFD signals an „Admin Down“ state in its control packets. This can lead to different results in convergence tests. If you really want to prove the BFD function, you should pull the cable or add a firewall filter.

I hope this gave you a short overview of the BFD protocol and its functions.

Ping to multiple IPs permanently with RPM

In your exam you should always check connectivity to all your devices after configuration changes, just to make sure everything is working as expected. You can use RPM services to continually send pings to the routers in your network. If a ping fails, you can check it with an operational mode command or see it in /var/log/messages.

Example configuration:

services {
    rpm {
        probe R1 {
            test ping-R1 {
                probe-type icmp-ping;
                target address 10.0.1.1;
                test-interval 30;
                thresholds {
                    successive-loss 1;
                }
            }
        }
        probe R2 {
            test ping-R2 {
                probe-type icmp-ping;
                target address 10.0.1.2;
                test-interval 30;
                thresholds { 
                    successive-loss 1;
                }
            }
        }
    }
}

Now your router sends an ICMP request (probe-type) to your destinations (target address) every 30 seconds (test-interval).

To monitor the operation and even see failures you can use the following commands:

  • show services rpm history-results
  • show services rpm probe-results
  • look in /var/log/messages for PING_TEST_FAILED
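Instead of opening the file directly, you can also grep the log from operational mode:

admin@router> show log messages | match PING_TEST_FAILED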

Sending multicast traffic in JunOS

If you need to send traffic to a multicast group to check whether multicast is working, you can simply use the ping tool. You have to add the „bypass-routing“ option to make sure the traffic is sent out without a route lookup.

Here is an example:

ping 239.1.1.1 bypass-routing interface ge-0/0/1 count 10000

You can use the „interval 0.1“ option to increase the packets per second.
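Putting both options together, a quick test could look like this:

ping 239.1.1.1 bypass-routing interface ge-0/0/1 interval 0.1 count 10000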

If you want a receiver, you could add a static IGMP join (a sketch follows at the end of this post), but this only creates the forwarding state. If you also need a reply to the multicast ping traffic, you must add a listener, like this:

set protocols sap listen 239.1.1.1
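For completeness, the static IGMP join mentioned above could look like this; the receiver-facing interface ge-0/0/2.0 is only an example:

set protocols igmp interface ge-0/0/2.0 static group 239.1.1.1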