Thursday, 31 August 2017

OSPF ABR Rules

CCIE Routing and Switching Official Guide : Page 505
Let’s restate once again the rules regarding originating and processing type 3 LSAs on ABRs. First, when an ABR originates type 3 LSAs on behalf of known routes, it translates only intra-area routes from a nonbackbone area into type 3 LSAs and floods them into the backbone, and it translates both intra-area and inter-area routes from the backbone area into type 3 LSAs and floods them into nonbackbone areas. Second, when an ABR runs the SPF algorithm, it ignores all type 3 LSAs received over nonbackbone areas. 
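The two rules map cleanly onto a pair of predicates. This is only an illustrative sketch (the function and parameter names are made up, not from any real OSPF implementation):

```python
def originates_type3(route_type: str, source_area: int) -> bool:
    """Rule 1: which known routes an ABR translates into type 3 LSAs."""
    if source_area != 0:
        # From a nonbackbone area, only intra-area routes are translated
        # and flooded into the backbone.
        return route_type == "intra-area"
    # From the backbone, both intra-area and inter-area routes are
    # translated and flooded into nonbackbone areas.
    return route_type in ("intra-area", "inter-area")


def spf_considers_type3(received_area: int) -> bool:
    """Rule 2: an ABR ignores type 3 LSAs received over nonbackbone areas."""
    return received_area == 0


# An inter-area route learned from nonbackbone area 1 is not re-advertised:
assert originates_type3("inter-area", 1) is False
# ...but the same route learned from the backbone is:
assert originates_type3("inter-area", 0) is True
# A type 3 LSA arriving over area 1 is ignored in the ABR's SPF run:
assert spf_considers_type3(1) is False
```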


For example, without this second rule, in the internetwork of Figure 9-10, router ABR2 would calculate a cost 3 path to subnet 1: from ABR2 to ABR1 inside area 1 and then from ABR1 to ABR3 in area 0. ABR2 would also calculate a cost 101 path to subnet 1, going from ABR2 through area 0 to ABR3. Clearly, the first of these two paths, with cost 3, is the least-cost path. However, ABRs use this additional loop-prevention rule, meaning that ABR2 ignores the type 3 LSA advertised by ABR1 for subnet 1. This behavior prevents ABR2 from choosing the path through ABR1, so in actual practice, ABR2 would find only one possible path to subnet 1: the path directly from ABR2 to ABR3.
[Figure 9-10: Effect of ABR2 Ignoring Path to Subnet 1 Through Area 1 — diagram showing Subnet 1 in Area 2, the cost 3 path from ABR2 through ABR1 (Area 1) to ABR3 (Area 0), and the cost 101 path from ABR2 directly through Area 0 to ABR3; most links are cost 1, and one Area 0 link is cost 100]
It is important to notice that the link between ABR1 and ABR2 is squarely inside nonbackbone area 1. If this link were in area 0, ABR2 would pick the best route to reach ABR3 as being ABR2 – ABR1 – ABR3, choosing the lower-cost route.
This loop-prevention rule has some even more interesting side effects for internal routers. Again in Figure 9-10, consider the routes calculated by internal Router R2 to reach subnet 1. R2 learns a type 3 LSA for subnet 1 from ABR1, with the cost listed as 2. To calculate the total cost for using ABR1 to reach subnet 1, R2 adds its cost to reach ABR1 (cost 2), totaling cost 4. Likewise, R2 learns a type 3 LSA for subnet 1 from ABR2, with cost 101.

This section covers the core OSPF configuration commands, along with the OSPF configuration topics not already covered previously in the chapter. (If you happened to skip the earlier parts of this chapter, planning to review OSPF configuration, make sure to go back and look at the earlier examples in the chapter. These examples cover OSPF stubby area configuration, OSPF network types, plus OSPF neighbor and priority commands.)
Example 9-8 shows configuration for the routers in Figure 9-5, with the following design goals in mind:
  • Proving that OSPF process IDs do not have to match on separate routers, though best practice recommends using the same process IDs across the network
  • Using the network command to match interfaces, thereby triggering neighbor discovery inside network 10.0.0.0
  • Configuring S1’s RID as 7.7.7.7
  • Setting priorities on the backbone LAN to favor S1 and S2 to become the DR/BDR
  • Configuring a minimal dead interval of 1 second, with hello multiplier of 4, yielding a 250-ms hello interval on the backbone LAN
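Example 9-8 itself is not captured in these notes, so here is a hedged sketch of what a matching S1 configuration might look like. The interface name, process ID, and wildcard mask are assumptions, since Figure 9-5 is not reproduced here; only the router-id, priority, and dead-interval details come from the goals above.

```
router ospf 7
 router-id 7.7.7.7
 ! network command matches all interfaces in 10.0.0.0, enabling neighbor discovery
 network 10.0.0.0 0.255.255.255 area 0
!
interface FastEthernet0/0
 ! High priority on the backbone LAN so S1 is favored in the DR election
 ip ospf priority 255
 ! 1-second dead interval; hello-multiplier 4 yields 250-ms hellos
 ip ospf dead-interval minimal hello-multiplier 4
```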
R2 calculates its cost to reach ABR2 (cost 1) and adds that to 101 to arrive at cost 102 for this alternative route. As a result, R2 picks the route through ABR1 as the best route.
However, the story gets even more interesting with the topology shown in Figure 9-10. R2’s next-hop router for the R2 – ABR2 – ABR1 – ABR3 path is ABR2. So, R2 forwards packets destined to subnet 1 to ABR2 next. However, as noted just a few paragraphs ago, ABR2’s route to reach subnet 1 points directly to ABR3. As a result, packets sent by R2, destined to subnet 1, actually take the path R2 – ABR2 – ABR3. As you can see, these decisions can result in arguably suboptimal routes, and even asymmetric routes, as would be the case in this particular example.
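The arithmetic behind R2's route choice can be recomputed in a few lines. The costs come from Figure 9-10 and the text above; the code itself is purely illustrative:

```python
# Type 3 LSA cost for subnet 1, per advertising ABR (from the text above).
advertised = {"ABR1": 2, "ABR2": 101}

# R2's intra-area cost to reach each ABR.
cost_to_abr = {"ABR1": 2, "ABR2": 1}

# Total cost via each ABR: cost to reach the ABR plus the advertised cost.
totals = {abr: cost_to_abr[abr] + advertised[abr] for abr in advertised}
best = min(totals, key=totals.get)

assert totals == {"ABR1": 4, "ABR2": 102}
assert best == "ABR1"   # R2 picks the route through ABR1...
# ...even though ABR2, the next hop, forwards straight to ABR3,
# producing the asymmetric R2 - ABR2 - ABR3 forwarding path.
```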

OpenStack

Keystone:
======

Authentication and authorization for users, services, and endpoints.
It uses tokens to authenticate and maintain session information.

Tenants are logically separated containers within your OpenStack cloud.
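A toy illustration of token-based authentication and session state, in the spirit of what Keystone does. This is NOT the Keystone API; the class and method names are made up for the sketch:

```python
import secrets
import time


class TokenService:
    """Issues opaque tokens and validates them until they expire."""

    def __init__(self, ttl_seconds: float = 3600):
        self.ttl = ttl_seconds
        self._tokens = {}            # token -> (user, tenant, expiry)

    def authenticate(self, user: str, tenant: str) -> str:
        """Issue a token after (assumed) credential checks succeed."""
        token = secrets.token_hex(16)
        self._tokens[token] = (user, tenant, time.time() + self.ttl)
        return token

    def validate(self, token: str):
        """Return (user, tenant) if the token is valid, else None."""
        entry = self._tokens.get(token)
        if entry is None or entry[2] < time.time():
            return None
        return entry[0], entry[1]


svc = TokenService()
tok = svc.authenticate("alice", "tenant-a")
assert svc.validate(tok) == ("alice", "tenant-a")
assert svc.validate("bogus-token") is None
```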

Glance:
=====

Used to store and manage guest images.
Images can be managed globally and per tenant
Users can be granted permission to upload specific images

Nova:
====
Compute platform to run guest machines
Boots instances from our Glance images
Multi-hypervisor support
Currently requires separate Nova instances per hypervisor
Nova is our management platform per hypervisor

* Regions are logical groups of OpenStack services
* Availability zones are groups of OpenStack Nova endpoints based on location
* Aggregates are groups of OpenStack Nova endpoints based on characteristics like SSD-backed storage or 10GbE networking

Nova Networking:

Networking inside Nova provides the feature set of traditional virtualized networking
- L2/L3, DHCP

Different types of IP networks are supported with Nova:

- Flat Networking
      Dedicated subnet with IP information injected while booting
- Flat DHCP
      Allocates IP addresses to instances from a dedicated subnet using dnsmasq
- VLAN Manager
      Each tenant is allocated a VLAN and IP range
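The Flat DHCP allocation model can be sketched with the standard library's `ipaddress` module. In a real deployment dnsmasq hands out the leases; this only illustrates handing out addresses from one dedicated subnet (the subnet and instance names are made up):

```python
import ipaddress

# The dedicated subnet instances boot into.
subnet = ipaddress.ip_network("10.0.0.0/28")

pool = iter(subnet.hosts())   # usable hosts: 10.0.0.1 .. 10.0.0.14
next(pool)                    # skip .1, conventionally the gateway

# Lease the next free address to each booting instance.
leases = {}
for instance in ("vm1", "vm2", "vm3"):
    leases[instance] = str(next(pool))

assert leases == {"vm1": "10.0.0.2", "vm2": "10.0.0.3", "vm3": "10.0.0.4"}
```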

Floating IP addresses are used when instances talk to public networks






Wednesday, 30 August 2017

TCP Initiation and Header field functions

PSH flag - set by interactive applications such as Telnet so that buffered data is pushed to the receiving application immediately

TCP Parameter Exchange
In addition to initial sequence numbers, SYN messages also are designed to convey important parameters about how the connection should operate. TCP includes a flexible scheme for carrying these parameters, in the form of a variable-length Options field in the TCP segment format that can be expanded to carry multiple parameters. Only a single parameter is defined in RFC 793 to be exchanged during connection setup: Maximum Segment Size (MSS). The significance of this parameter is explained in the TCP data transfer section.
Each device sends the other the MSS that it wants to use for the connection, if it wishes to use a non-default value. When receiving the SYN, the server records the MSS value that the client sent, and will never send a segment larger than that value to the client. The client does the same for the server. The client and server MSS values are independent, so a connection can be established where the client can receive larger segments than the server or vice-versa.
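A toy model of that exchange: each side records the peer's announced MSS and never builds a larger segment. The names and values below are illustrative, not from any real TCP stack:

```python
def segmentize(data: bytes, peer_mss: int):
    """Split outgoing data into segments no larger than the peer's MSS."""
    return [data[i:i + peer_mss] for i in range(0, len(data), peer_mss)]


# Client announces MSS 1460 in its SYN; server announces 500 in its SYN-ACK.
client_mss, server_mss = 1460, 500

# The values are independent: the client's segments to the server may be at
# most 500 bytes, while the server may send the client up to 1460 bytes.
to_server = segmentize(b"x" * 1200, server_mss)
to_client = segmentize(b"x" * 1200, client_mss)

assert [len(s) for s in to_server] == [500, 500, 200]
assert [len(s) for s in to_client] == [1200]
```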
Later RFCs have defined additional parameters that may be exchanged during connection setup. Some of these include:
  • Window Scale Factor: Allows a pair of devices to specify larger window sizes than would normally be possible given the 16-bit size of the TCP Window field.
  • Selective Acknowledgment Permitted: Allows a pair of devices to use the optional selective acknowledgment feature to allow only certain lost segments to be retransmitted.
  • Alternate Checksum Method: Lets devices specify an alternative method of performing checksums than the standard TCP mechanism.

TCP Header Field Functions
The price we pay for this flexibility is that the TCP header is large: 20 bytes for regular segments and more for those carrying options. This is one of the reasons why some protocols prefer to use UDP if they don't need TCP's features. The TCP header fields are used for the following general purposes:
  • Process Addressing: The processes on the source and destination devices are identified using port numbers.
  • Sliding Window System Implementation: Sequence numbers, acknowledgment numbers and window size fields implement the TCP sliding window system.
  • Control Bits and Fields: Special bits that implement various control functions, and fields that carry pointers and other data needed for them.
  • Carrying Data: The Data field carries the actual bytes of data being sent between devices.
  • Miscellaneous Functions: A checksum for data protection and options for connection setup.
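To make the field layout concrete, the 20-byte fixed header can be parsed with the standard `struct` module. The sample segment below is hand-built for illustration:

```python
import struct


def parse_tcp_header(raw: bytes) -> dict:
    """Unpack the 20-byte fixed TCP header (network byte order)."""
    (src_port, dst_port, seq, ack,
     offset_reserved, flags, window,
     checksum, urgent_ptr) = struct.unpack("!HHIIBBHHH", raw[:20])
    return {
        "src_port": src_port,                   # process addressing
        "dst_port": dst_port,
        "seq": seq,                             # sliding window fields
        "ack": ack,
        "data_offset": offset_reserved >> 4,    # header length in 32-bit words
        "flags": flags,                         # control bits (FIN, SYN, ACK, ...)
        "window": window,
        "checksum": checksum,                   # data protection
        "urgent_ptr": urgent_ptr,
    }


# A hand-built SYN segment: ports 12345 -> 80, seq 1000, data offset 5,
# SYN flag (0x02) set, window 65535.
hdr = struct.pack("!HHIIBBHHH", 12345, 80, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
parsed = parse_tcp_header(hdr)
assert parsed["src_port"] == 12345 and parsed["dst_port"] == 80
assert parsed["flags"] == 0x02 and parsed["data_offset"] == 5
```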

Tuesday, 29 August 2017

TCP Sliding Window

http://www.tcpipguide.com/free/t_TCPSlidingWindowAcknowledgmentSystemForDataTranspo-6.htm
If a transmission is not acknowledged after a period of time, it is retransmitted by its sender. This system is called positive acknowledgment with retransmission (PAR).
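PAR can be sketched as a retry loop: keep resending a segment until an acknowledgment arrives. The lossy "network" below is simulated and deterministic; everything here is illustrative:

```python
def send_with_par(segment: str, network, max_tries: int = 5) -> int:
    """Resend until acknowledged; return the number of transmissions used."""
    for attempt in range(1, max_tries + 1):
        if network(segment):          # True means an ACK came back in time
            return attempt
        # No ACK before the timeout: fall through and retransmit.
    raise TimeoutError(f"no acknowledgment after {max_tries} tries")


# Simulated network that loses the first two transmissions.
outcomes = iter([False, False, True])
tries = send_with_par("SEG-1", lambda seg: next(outcomes))
assert tries == 3   # first two copies lost, third one acknowledged
```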


At any point in time we can take a “snapshot” of the process. If we do, we can conceptually divide the bytes that the sending TCP has in its buffer into four categories, viewed as a timeline (Figure 206):
  1. Bytes Sent And Acknowledged: The earliest bytes in the stream will have been sent and acknowledged. These are basically “accomplished” from the standpoint of the device sending data. For example, let's suppose that 31 bytes of data have already been sent and acknowledged. These would fall into Category #1.
  2. Bytes Sent But Not Yet Acknowledged: These are the bytes that the device has sent but for which it has not yet received an acknowledgment. The sender cannot consider these “accomplished” until they are acknowledged. Let's say there are 14 bytes here, in Category #2.
  3. Bytes Not Yet Sent For Which Recipient Is Ready: These are bytes that have not yet been sent, but which the recipient has room for based on its most recent communication to the sender of how many bytes it is willing to handle at once. The sender will try to send these immediately (subject to certain algorithmic restrictions we'll explore later). Suppose there are 6 bytes in Category #3.
  4. Bytes Not Yet Sent For Which Recipient Is Not Ready: These are the bytes further “down the stream” which the sender is not yet allowed to send because the receiver is not ready. There are 44 bytes in Category #4.
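The four categories above can be recomputed from the example's numbers (a 95-byte stream: 31 acknowledged, 14 in flight, 6 more the receiver is ready for, 44 beyond the window). Variable names loosely follow TCP's send-side pointers, purely for illustration:

```python
total_bytes = 95
una = 31            # first unacknowledged byte (bytes 0..30 are acked)
nxt = una + 14      # next byte to send (14 bytes currently in flight)
window = 14 + 6     # receiver's advertised window, measured from `una`

cat1 = una                           # 1: sent and acknowledged
cat2 = nxt - una                     # 2: sent, not yet acknowledged
cat3 = (una + window) - nxt          # 3: usable window, ready to send now
cat4 = total_bytes - (una + window)  # 4: beyond the window, must wait

assert (cat1, cat2, cat3, cat4) == (31, 14, 6, 44)
assert cat1 + cat2 + cat3 + cat4 == total_bytes
```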