Post on 13-Aug-2020
B.E./B.Tech. DEGREE EXAMINATION, APRIL/MAY 2017
Fourth/Fifth/Sixth/Seventh/Eighth Semester
Computer Science and Engineering
CS 6551- COMPUTER NETWORKS
(REGULATION 2013)
TIME: 3 Hours                Maximum: 100 Marks
Answer All Questions
PART A-(10X2=20 Marks)
1. Distinguish between packet-switched and circuit-switched networks.
2. What is meant by bit stuffing? Give an example.
3. State the functions of bridges.
4. When is ICMP redirect message used?
5. How do routers differentiate the incoming unicast, multicast and broadcast IP
packets?
6. Why is IPv4 to IPv6 transition required?
7. List the advantages of connection-oriented services over connectionless services.
8. How does the fast retransmit mechanism of TCP work?
9. State the usage of conditional get in HTTP.
10. Present the information contained in a DNS resource record.
PART B-(5X13=65 marks)
11. (a) (i) Explain the challenges faced in building a network. (10)
(ii) Obtain the 4-bit CRC code for the data bit sequence 10011011100 using the
polynomial x^4 + x^2 + 1. (3)
(Or)
(b) (i) With a protocol graph, explain the architecture of the Internet. (7)
(ii) Consider a bus LAN with a number of equally spaced stations with a data rate of 9
Mbps and a bus length of 1 km. What is the mean time to send a frame of 500 bits to
another station, measured from the beginning of transmission to the end of reception?
Assume a propagation speed of 150 m/s. If two stations begin to monitor and transmit
at the same time, how long do they need to wait before the interference is noticed? (6)
12. (a) (i) Discuss the working of CSMA/CD protocol. (6)
(ii) Explain the functions of MAC layer present in IEEE 802.11 with necessary
diagrams. (7)
(Or)
(b) (i) Consider sending a 3500-byte datagram that has arrived at a router R1 and needs
to be sent over a link with an MTU of 1000 bytes to R2. It then has to traverse a
link with an MTU of 600 bytes. Let the identification number of the original datagram
be 465. How many fragments are delivered at the destination? Show the parameters
associated with each of these fragments. (6)
(ii)Explain the working of DHCP protocol with its header format. (7)
13. (a) Explain in detail the operation of OSPF protocol by considering a suitable
network.(13)
(Or)
(b) Explain the working of Protocol Independent Multicast (PIM) in detail. (13)
14. (a)(i) Explain the adaptive flow control and retransmission techniques used in TCP.
(8)
(ii) With TCP's slow start and AIMD for congestion control, show how the window
size will vary for a transmission where every 5th packet is lost. Assume an advertised
window size of 50 MSS. (5)
(or)
(b) (i) Explain congestion avoidance using random early detection in the transport layer
with an example. (7)
(ii) Explain the Differentiated Services operation of QoS in detail. (6)
15. (a) (i) Describe how SMTP transfers messages from one host to another with a suitable
illustration. (6)
(ii) Explain IMAP with its state transition diagram (7)
(or)
(b) (i) List the elements of network management and explain the operation of the SNMP
protocol in detail. (8)
(ii) Discuss the functions performed by DNS. Give an example. (5)
PART C-(1X15= 15 marks)
16. (a) (i) Draw the format of the TCP packet header and explain each of its fields. (10)
(ii) Specify the justification for having variable field lengths for the fields in the TCP
header. (5)
(or)
(b) Illustrate the sequence of events and the respective protocols involved while
accessing a web page from a machine when it is connected to the Internet for the first
time. (15)
PART A-(10X2=20 Marks)
1. Distinguish between packet-switched and circuit-switched networks.
Circuit Switching                           | Packet Switching (Datagram)          | Packet Switching (Virtual Circuit)
--------------------------------------------|--------------------------------------|------------------------------------------------------
Dedicated path                              | No dedicated path                    | No dedicated path
Path is established for entire conversation | Route is established for each packet | Route is established for entire conversation
Call setup delay                            | Packet transmission delay            | Call setup delay as well as packet transmission delay
Overload may block call setup               | Overload increases packet delay      | Overload may block call setup and increase packet delay
Fixed bandwidth                             | Dynamic bandwidth                    | Dynamic bandwidth
No overhead bits after call setup           | Overhead bits in each packet         | Overhead bits in each packet
2. What is meant by bit stuffing? Give an example.
Bit stuffing is the process of inserting an extra 0 bit whenever five consecutive 1s
appear in the data, so that the receiver will not mistake the data for the
frame-delimiting flag.
Example: an HDLC frame begins and ends with the flag 01111110; if the sequence
01111110 appears in the data, the sender transmits it as 011111010 and the receiver
removes the stuffed 0. Bit stuffing is defined by some to include bit padding, which is
the addition of bits to make a transmission unit conform to a standard size, but it is
distinct from bit robbing, a type of in-band signalling.
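The stuffing rule above can be sketched in a few lines of Python; the helper names are illustrative, not part of any standard API:

```python
def bit_stuff(bits):
    """Insert a 0 after every run of five consecutive 1s in the data."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # the stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits):
    """Undo bit_stuff: drop the 0 that follows five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        i += 1
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
    return "".join(out)
```

For the flag-like data 01111110 this yields 011111010 on the wire, and unstuffing restores the original bits.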
3. State the functions of bridges.
A bridge is a device that connects and passes packets between two network segments
that use the same communications protocol. Bridges operate at the data link layer
(layer 2) of the OSI reference model. A bridge will filter, forward or flood an
incoming frame based on the MAC address of that frame.
4. When is ICMP redirect message used?
The Internet Control Message Protocol (ICMP) defines a collection of error and control
messages that are sent back to the source host whenever a router or host is unable to
process an IP datagram successfully. An ICMP redirect message is used when a router
notices that a host is sending its datagrams to a suboptimal first-hop router; the
redirect tells the host to send subsequent datagrams for that destination to a better
router on the same network.
5. How do routers differentiate the incoming unicast, multicast and broadcast IP
packets?
A unicast transmission sends IP packets to a single recipient on a network. A
multicast transmission sends IP packets to a group of hosts on a network, identified by
a multicast group address. A broadcast transmission sends IP packets to all hosts on a
network segment (for example, to the limited broadcast address 255.255.255.255).
Routers distinguish the three by the destination address: a class D destination
(224.0.0.0 – 239.255.255.255) marks a multicast packet, a broadcast address marks a
broadcast packet, and any other destination is unicast. For example, if streaming video
is to be distributed to a single destination, you would start a unicast stream by setting
the destination IP address and port on the AVN equal to the destination's values. If you
want to view the stream at multiple concurrent locations, you would set the AVN's
destination IP address to a valid multicast IP address. Note that while the multicast IP
address range is 224.0.0.0 – 239.255.255.255, the first octet (224.xxx.xxx.xxx) is
generally reserved for administration.
6. Why is IPv4 to IPv6 transition required?
The 32-bit IPv4 address space is close to exhaustion, so networks must move to the
128-bit addresses of IPv6; because the two versions are not directly interoperable, a
planned transition is required. The main transition mechanisms are:
Dual Stack Routers
Tunneling
NAT Protocol Translation
7. List the advantages of connection oriented services over connection less services.
Guarantees reliable, in-order delivery of messages.
TCP is a full-duplex protocol.
TCP provides process-to-process communication.
Has a built-in congestion-control mechanism.
Ensures flow control, as the sliding window forms the heart of TCP operation.
Some well-known TCP ports are 21 (FTP), 23 (TELNET), 25 (SMTP), 80 (HTTP), etc.
The sending TCP buffers bytes in a send buffer and transmits data units as segments.
Segments are stored in a receive buffer at the other end for the application to read.
8. How do fast retransmit mechanism of TCP works?
Fast retransmit is a heuristic that sometimes triggers the retransmission of a
dropped packet before the RTO period is up.
To be clear, fast retransmit does not supplant the timeout mechanism; the timeout
mechanism still operates normally for a small window size, where there are not enough
packets in transit to trigger fast retransmit.
TCP can employ fast retransmit only with a large window size, to enhance its
performance and link utilization.
The idea of fast retransmit is straightforward: every time a data packet arrives out of
order, the receiver resends the same acknowledgment it sent last time (a duplicate
ACK); when the sender sees the third duplicate ACK, it retransmits the apparently
missing segment without waiting for the timeout.
9. State the usage of conditional get in HTTP.
A conditional GET is an HTTP GET request that may return an HTTP 304 response
(instead of HTTP 200). An HTTP 304 response indicates that the resource has not
been modified since the previous GET, and so the resource is not returned to the client
in such a response.
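The decision a server makes for a conditional GET can be sketched as a small function. The header names (Last-Modified, If-Modified-Since) and status codes are the real HTTP ones; the function itself is a hypothetical illustration, not a real API:

```python
from email.utils import parsedate_to_datetime

def respond(last_modified, if_modified_since):
    """Status code a server would return for a GET that may carry an
    If-Modified-Since validator (illustrative sketch)."""
    if if_modified_since is not None:
        if parsedate_to_datetime(last_modified) <= parsedate_to_datetime(if_modified_since):
            return 304  # Not Modified: the resource body is omitted
    return 200          # resource changed (or no validator): full response
```

A client caches the Last-Modified value from the first 200 response and sends it back as If-Modified-Since on later requests, saving the transfer when nothing has changed.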
10. Present the information contained in a DNS resource record.
A resource record is a name-to-value binding or, more specifically, a 5-tuple that
contains the following fields:
< Name, Value, Type, Class, TTL >
The Name and Value fields are exactly what you would expect, while the Type field
specifies how the Value should be interpreted. For example, Type=A indicates that
the Value is an IP address. Thus, A records implement the name-to-address mapping we
have been assuming.
Other record types include
NS: The Value field gives the domain name for a host that is running a name server that
knows how to resolve names within the specified domain.
CNAME: the Value field gives the canonical name for a particular host; it is used to
define aliases.
MX: The Value field gives the domain name for a host that is running a mail server
that accepts the messages for the specified domain.
The Class field was included to allow entities other than the NIC to define useful
record types.
Finally, the TTL field shows how long this resource record is valid. It is used by
servers that cache resource records from other servers; when the TTL expires, the
server must evict the record from its cache.
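The 5-tuple can be sketched as a tiny record store. The record values below are made-up examples (192.0.2.10 is a documentation address), and the lookup helper is illustrative, not real DNS machinery:

```python
from collections import namedtuple

# A resource record as the 5-tuple described above.
ResourceRecord = namedtuple("ResourceRecord", "name value rtype rclass ttl")

# Illustrative records only; names and addresses are made-up examples.
records = [
    ResourceRecord("example.com", "192.0.2.10", "A", "IN", 3600),
    ResourceRecord("example.com", "ns1.example.com", "NS", "IN", 3600),
    ResourceRecord("www.example.com", "example.com", "CNAME", "IN", 3600),
    ResourceRecord("example.com", "mail.example.com", "MX", "IN", 3600),
]

def lookup(name, rtype):
    """Return the Value of the first record matching (Name, Type), if any."""
    for r in records:
        if r.name == name and r.rtype == rtype:
            return r.value
    return None
```

Resolving www.example.com first hits the CNAME record, whose Value is then looked up again as an A record.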
PART B-(5X13=65 marks)
11. (a) (i) Explain the challenges faced in building a network. (10)
Explain the following concepts:
Scalable Connectivity
Cost-Effective Resource Sharing
Support for Common Services
Manageability
(ii) Obtain the 4-bit CRC code for the data bit sequence 10011011100 using
the polynomial x^4 + x^2 + 1. (3)
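Working the division: the generator x^4 + x^2 + 1 corresponds to the bit pattern 10101; appending four zeros to 10011011100 and dividing modulo 2 leaves the remainder 1100, which is the 4-bit CRC (so the transmitted codeword is 100110111001100). A short sketch of the computation; the function name is illustrative:

```python
def crc_remainder(data, generator):
    """Mod-2 (XOR) long division; returns a CRC of len(generator)-1 bits."""
    n = len(generator) - 1
    bits = [int(b) for b in data] + [0] * n     # append n zero bits
    gen = [int(b) for b in generator]
    for i in range(len(data)):
        if bits[i] == 1:                         # divide wherever a 1 leads
            for j in range(len(gen)):
                bits[i + j] ^= gen[j]
    return "".join(str(b) for b in bits[-n:])    # remainder = last n bits

# x^4 + x^2 + 1  ->  1 0 1 0 1
print(crc_remainder("10011011100", "10101"))     # -> 1100
```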
(Or)
(b) (i) With a protocol graph, explain the architecture of the Internet. (7)
The Internet architecture evolved out of experiences with an earlier packet-switched
network called the ARPANET. Both the Internet and the ARPANET were funded by the
Advanced Research Projects Agency (ARPA). The Internet and ARPANET were around before
the OSI architecture, and the experience gained from building them was a major
influence on the OSI reference model. Instead of having seven layers, a four-layer
model is often used in the Internet.
The Internet uses the TCP/IP protocol suite, also known as the Internet suite. This
defines the Internet Model, which contains a four-layered architecture. The OSI Model
is a general communication model, but the Internet Model is what the Internet uses for
all its communication. The Internet is independent of its underlying network
architecture, and so is its Model. This model has the following layers:
Application Layer: This layer defines the protocols which enable the user to interact
with the network. For example, FTP, HTTP etc.
Transport Layer: This layer defines how data should flow between hosts. The major
protocol at this layer is the Transmission Control Protocol (TCP). This layer ensures
data delivered between hosts is in order and is responsible for end-to-end delivery.
Internet Layer: The Internet Protocol (IP) works on this layer. This layer facilitates
host addressing and recognition. This layer defines routing.
Link Layer: This layer provides the mechanism for sending and receiving actual data.
Unlike its OSI Model counterpart, this layer is independent of the underlying network
architecture and hardware.
At the lowest level are a wide variety of network protocols, denoted NET1, NET2 and so
on. The second layer consists of a single protocol, the Internet Protocol (IP). It
supports the interconnection of multiple networking technologies into a single, logical
internetwork. The third layer contains two main protocols, the Transmission Control
Protocol (TCP) and the User Datagram Protocol (UDP). TCP provides a reliable
byte-stream channel, and UDP provides an unreliable datagram delivery channel. These
are called end-to-end protocols; they can also be referred to as transport protocols.
Running above the transport layer is a range of application protocols, such as FTP,
TFTP, Telnet, and SMTP, that enable the interoperation of popular applications.
(ii) Consider a bus LAN with a number of equally spaced stations with a data
rate of 9 Mbps and a bus length of 1 km. What is the mean time to send a frame
of 500 bits to another station ,measured from the beginning of transmission to
the end of reception ? Assume a propagation speed of 150 m/s. If two stations
begin to monitor and transmit at the same time, how long does it need to wait
before an interference is noticed? (6)
Assume the mean distance between two stations is 500 m, and read the propagation speed
as 150 m/µs (150 m/s would be physically implausible for a LAN medium).
Mean time to send = propagation time + transmission time
= 500 m / 150 m/µs + 500 bits / 9 Mbps
≈ 3.33 µs + 55.56 µs ≈ 58.89 µs
If the two stations begin transmitting at the same time, each notices the interference
after one propagation delay between them:
T_interference = 500 m / 150 m/µs ≈ 3.33 µs
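The arithmetic can be checked with a few lines; the 500 m mean separation is an assumption, and the question's speed is read as 150 m/µs:

```python
# Question's figures, with the propagation speed read as 150 m/us
# (150 m/s would be physically implausible for a LAN medium).
DISTANCE_M = 500            # assumed mean separation between two stations
SPEED_M_PER_US = 150
DATA_RATE_BPS = 9e6
FRAME_BITS = 500

prop_delay_us = DISTANCE_M / SPEED_M_PER_US        # ~3.33 us
tx_time_us = FRAME_BITS / DATA_RATE_BPS * 1e6      # ~55.56 us
mean_send_us = prop_delay_us + tx_time_us          # ~58.89 us

# Two simultaneous senders each notice the collision after one
# propagation delay between them.
interference_us = prop_delay_us
```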
12. (a) (i) Discuss the working of CSMA/CD protocol. (6)
The receiver side of the Ethernet protocol is simple; the real smarts are implemented at
the sender’s side. The transmitter algorithm is defined as follows:
When the adaptor has a frame to send and the line is busy, it waits for the line to go
idle and then transmits immediately.
The Ethernet is said to be a 1-persistent protocol because an adaptor with a frame to
send transmits with probability 0<=p<=1 after a line becomes idle, and defers with
probability q=1-p.Because there is no centralized control it is possible for two (or
more) adaptors to begin transmitting at the same time, either because both found the
line to be idle or because both had been waiting for a busy line to become idle.
When this happens, the two (or more) frames are said to collide on the network. Each
sender, because the Ethernet supports collision detection, is able to determine that a
collision is in progress. At the moment an adaptor detects that its frame is colliding
with another, it first makes sure to transmit a 32-bit jamming sequence and then stops
the transmission.
Thus, a transmitter will minimally send 96 bits in the case of a collision: a 64-bit
preamble plus the 32-bit jamming sequence. One case in which an adaptor sends only 96
bits (sometimes called a runt frame) is when the two hosts are close to each other.
Had the two hosts been farther apart, they would have had to transmit longer, and thus
send more bits, before detecting the collision.
In fact, the worst-case scenario happens when the two hosts are at opposite ends of the
Ethernet. To know for sure that the frame it just sent did not collide with another
frame, the transmitter may need to send as many as 512 bits.
Not coincidentally, every Ethernet frame must be at least 512 bits (64 bytes) long: 14
bytes of header plus 46 bytes of data plus 4 bytes of CRC.
Consider the worst case, where hosts A and B are at opposite ends of the network.
Suppose host A begins transmitting a frame at time t, as shown in (a). It takes one
link latency (let's denote the latency as d) for the frame to reach host B.
Thus, the first bit of A's frame arrives at B at time t+d, as shown in (b). Suppose an
instant before host A's frame arrives (i.e., B still sees an idle line), host B begins
to transmit its own frame.
B's frame will immediately collide with A's frame, and this collision will be detected
by host B, as shown in (c). Host B will send the 32-bit jamming sequence, as described
above (B's frame will be a runt).
Unfortunately, host A will not know that the collision occurred until B's frame
reaches it, which will happen one link latency later, at time t+2×d, as shown in (d).
Host A must continue to transmit until this time in order to detect the collision. In
other words, host A must transmit for 2×d to be sure that it detects all possible
collisions.
Considering that a maximally configured Ethernet is 2,500 m long, and that there may
be up to four repeaters between any two hosts, the round-trip delay has been
determined to be 51.2 microseconds, which on a 10-Mbps Ethernet corresponds to
512 bits.
The other way to look at this situation is that we need to limit the Ethernet's
maximum round-trip latency to a fairly small value (e.g., 51.2 microseconds) for the
access algorithm to work; hence, an Ethernet's maximum length must be something on the
order of 2,500 m.
Once an adaptor has detected a collision and stopped its transmission, it waits a
certain amount of time and tries again. Each time it tries to transmit but fails, the
adaptor doubles the amount of time it waits before trying again.
This strategy of doubling the delay interval between each retransmission attempt is a
general technique known as exponential backoff. More precisely, the adaptor first
delays either 0 or 51.2 microseconds, selected at random. If this effort fails, it then
waits 0, 51.2, 102.4, or 153.6 microseconds (selected randomly) before trying again;
this is k×51.2 for k = 0, ..., 2^2-1, again selected at random.
In general, the algorithm randomly selects a k between 0 and 2^n-1 and waits k×51.2
microseconds, where n is the number of collisions experienced so far. The adaptor
gives up after a given number of tries and reports a transmit error to the host.
Adaptors typically retry up to 16 times, although the backoff algorithm caps n in the
above formula at 10.
(ii) Explain the functions of MAC layer present in IEEE 802.11 with
necessary diagrams. (7)
The technique for selecting an AP is called scanning and involves the following four steps:
1. The node sends a Probe frame.
2. All APs within reach reply with a Probe Response frame.
3. The node selects one of the access points and sends that AP an Association Request frame.
4. The AP replies with an Association Response frame.
A node engages this protocol whenever it joins the network, as well as when it becomes
unhappy with its current AP. This might happen, for example, because the signal from its
current AP has weakened due to the node moving away from it. Whenever a node acquires a
new AP, the new AP notifies the old AP of the change (this happens in step 4) via the
distribution system.
Figure: Access points connected to a distribution system.
Figure: Node mobility.
Consider the situation shown in Figure 2.33, where node C moves from the cell serviced by
AP-1 to the cell serviced by AP-2. As it moves, it sends Probe frames, which eventually
result in Probe Response frames from AP-2. At some point, C prefers AP-2 over AP-1, and
so it associates itself with that access point.
The mechanism just described is called active scanning since the node is actively searching
for an access point. APs also periodically send a Beacon frame that advertises the capabilities
of the access point; these include the transmission rates supported by the AP. This is called
passive scanning, and a node can change to this AP based on the Beacon frame simply by
sending an Association Request frame back to the access point.
Frame Format
Figure: 802.11 frame format.
Most of the 802.11 frame format, depicted in the figure above, is exactly what we
would expect. The frame contains the source and destination node addresses, each of
which is 48 bits long; up to 2312 bytes of data; and a 32-bit CRC. The Control field
contains three subfields of interest (not shown): a 6-bit Type field that indicates
whether the frame carries data, is an RTS or CTS frame, or is being used by the
scanning algorithm, and a pair of 1-bit fields, called ToDS and FromDS, that are
described below.
The peculiar thing about the 802.11 frame format is that it contains four, rather than
two, addresses. How these addresses are interpreted depends on the settings of the
ToDS and FromDS bits in the frame's Control field. This is to account for the
possibility that the frame had to be forwarded across the distribution system, which
would mean that the original sender is not necessarily the same as the most recent
transmitting node. Similar reasoning applies to the destination address.
In the simplest case, when one node is sending directly to another, both the DS bits
are 0, Addr1 identifies the target node, and Addr2 identifies the source node. In the
most complex case, both DS bits are set to 1, indicating that the message went from a
wireless node onto the distribution system, and then from the distribution system to
another wireless node. With both bits set, Addr1 identifies the ultimate destination,
Addr2 identifies the immediate sender (the one that forwarded the frame from the
distribution system to the ultimate destination), Addr3 identifies the intermediate
destination (the one that accepted the frame from a wireless node and forwarded it
across the distribution system), and Addr4 identifies the original source.
(Or)
(b) (i) Consider sending a 3500-byte datagram that has arrived at a router R1 and
needs to be sent over a link with an MTU of 1000 bytes to R2. It then has to
traverse a link with an MTU of 600 bytes. Let the identification number of the
original datagram be 465. How many fragments are delivered at the destination?
Show the parameters associated with each of these fragments. (6)
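Assuming a 20-byte IP header, the datagram carries 3480 data bytes. Over the 1000-byte MTU link, each fragment can carry at most 976 data bytes (the largest multiple of 8 not exceeding 980), giving pieces of 976, 976, 976 and 552 bytes; over the 600-byte MTU link, each 976-byte piece splits again into 576 + 400 bytes, while the 552-byte piece fits as is. So seven fragments reach the destination, all with identification 465. A sketch of the computation; the function is illustrative:

```python
def fragment(data_len, mtu, ident, offset8=0, last=True):
    """Fragment data_len payload bytes (20-byte IP header assumed) for a link
    with the given MTU. offset8 is the starting offset in 8-byte units; `last`
    says whether this piece ends the original datagram. Returns tuples of
    (total length, identification, fragment offset, more-fragments flag)."""
    max_data = (mtu - 20) // 8 * 8   # fragment data must be a multiple of 8
    frags = []
    while data_len > 0:
        chunk = min(data_len, max_data)
        data_len -= chunk
        mf = 0 if (last and data_len == 0) else 1
        frags.append((chunk + 20, ident, offset8, mf))
        offset8 += chunk // 8
    return frags

# First hop (MTU 1000): 3480 data bytes -> 976 + 976 + 976 + 552
first = fragment(3480, 1000, 465)
# Second hop (MTU 600): re-fragment each piece independently
final = []
for length, ident, off, mf in first:
    final.extend(fragment(length - 20, 600, ident, off, last=(mf == 0)))
print(len(final))  # -> 7 fragments reach the destination
```

The resulting offsets (in 8-byte units) are 0, 72, 122, 194, 244, 316 and 366; only the last fragment has the MF flag cleared.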
(ii)Explain the working of DHCP protocol with its header format. (7)
1. DHCP uses the concept of a relay agent. There is at least one relay agent on each
network, and it is configured with just one piece of information: the IP address of the
DHCP server.
2. When a relay agent receives a DHCPDISCOVER message, it unicasts it to the DHCP
server and awaits the response, which it will then send back to the requesting client.
The process of relaying a message from a host to a remote DHCP server is shown in the
figure.
DHCP Protocol Format:
Operation―specifies type of DHCP packet.
Xid―specifies the transaction id.
ciaddr―specifies client IP address in case of DHCPREQUEST
yiaddr― known as your IP address, filled by DHCP server.
siaddr―contains IP address of the DHCP server.
giaddr―contains IP address of the Gateway or relay agent.
chaddr―contains hardware (physical) address of the client.
options―contains information such as lease duration, default route, DNS server, etc.
Dynamic Address Allocation
1. The DHCP server is configured with a range of addresses to be assigned to hosts on
demand. To contact the DHCP server, a client broadcasts a DHCPDISCOVER message to IP
address 255.255.255.255 with its physical address placed in the chaddr field.
2. DHCP server selects an unassigned IP address for yiaddr field and adds an entry to
dynamic database along with client's physical address.
3. DHCP server sends DHCPOFFER message containing client's IP and physical address,
server IP address and options.
4. Client sends a DHCPREQUEST message, requesting the offered address.
5. Based on transaction id, the DHCP server acknowledges with a DHCPACK message.
6. When lease period expires, client attempts to renew. It’s up to server to accept or reject it.
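The DISCOVER/OFFER/REQUEST/ACK exchange above can be sketched as a toy server. The class and method names are illustrative; only the field names xid, chaddr and yiaddr come from the header described earlier:

```python
class ToyDHCPServer:
    """Toy sketch of dynamic address allocation (illustrative, not a real
    DHCP implementation)."""
    def __init__(self, pool):
        self.pool = list(pool)   # range of unassigned addresses
        self.leases = {}         # chaddr -> offered/leased yiaddr

    def discover(self, xid, chaddr):
        yiaddr = self.pool.pop(0)        # select an unassigned address
        self.leases[chaddr] = yiaddr     # record it in the dynamic database
        return {"op": "DHCPOFFER", "xid": xid, "yiaddr": yiaddr}

    def request(self, xid, chaddr, yiaddr):
        if self.leases.get(chaddr) == yiaddr:
            return {"op": "DHCPACK", "xid": xid, "yiaddr": yiaddr}
        return {"op": "DHCPNAK", "xid": xid}

server = ToyDHCPServer(["10.0.0.5", "10.0.0.6"])
offer = server.discover(xid=1, chaddr="aa:bb:cc:dd:ee:ff")
ack = server.request(xid=1, chaddr="aa:bb:cc:dd:ee:ff", yiaddr=offer["yiaddr"])
```

The client keeps the same xid across the exchange so the server's DHCPACK can be matched to its DHCPREQUEST.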
Disadvantage:
It introduces some more complexity into network management, since it makes the binding
between physical hosts and IP addresses much more dynamic.
13. (a) Explain in detail the operation of OSPF protocol by considering a
suitable network. (13)
Link-state routing is the second major class of intradomain routing protocol. The
starting assumptions for link-state routing are rather similar to those for
distance-vector routing. Each node is assumed to be capable of finding out the state of
the link to its neighbors (up or down) and the cost of each link. Again, we want to
provide each node with enough information to enable it to find the least-cost path to
any destination.
The basic idea behind link-state protocols is very simple: every node knows how to
reach its directly connected neighbors, and if we make sure that the totality of this
knowledge is disseminated to every node, then every node will have enough knowledge of
the network to build a complete map of the network. Thus, link-state routing protocols
rely on two mechanisms: reliable dissemination of link-state information, and the
calculation of routes from the sum of all the accumulated link-state knowledge.
Reliable Flooding
Reliable flooding is the process of making sure that all the nodes participating in the
routing protocol get a copy of the link-state information from all the other nodes. As
the term "flooding" suggests, the basic idea is for a node to send its link-state
information out on all of its directly connected links, with each node that receives
this information forwarding it out on all of its links. This process continues until
the information has reached all the nodes in the network. More precisely, each node
creates an update packet, also called a link-state packet (LSP), that contains the
following information:
The ID of the node that created the LSP
A list of directly connected neighbors of that node, with the cost of the link to each
one
A sequence number
A time to live for this packet.
The first two items are needed to enable route calculation; the last two are used to
make the process of flooding the packet to all nodes reliable. Reliability includes
making sure that you have the most recent copy of the information, since there may be
multiple, contradictory LSPs from one node traversing the network.
The figure shows an LSP being flooded in a small network. Each node becomes shaded as
it stores the new LSP. In Figure (a) the LSP arrives at node X, which sends it to
neighbors A and C. In Figure (b), A and C do not send it back to X, but send it on to
B. Since B receives two identical copies of the LSP, it will accept whichever arrived
first and ignore the second as a duplicate. It then passes the LSP on to D, which has
no neighbors to flood it to, and the process is complete.
Just as in RIP, each node generates LSPs under two circumstances. Either the expiry of a
periodic timer or a change in topology can cause a node to generate a new LSP. However, the
only topology-based reason for a node to generate an LSP is if one of its directly connected
links or immediate neighbors has gone down.
The failure of a link can be detected in some cases by the link-layer protocol. The demise of a
neighbor or loss of connectivity to that neighbor can be detected using periodic hello packets.
Each node sends these to its immediate neighbors at defined intervals. If a
sufficiently long time passes without receipt of a "hello" from a neighbor, the link to
that neighbor will be declared down, and a new LSP will be generated to reflect this
fact.
One of the important design goals of a link-state protocol’s flooding mechanism is that the
newest information must be flooded to all nodes as quickly as possible, while old information
must be removed from the network and not allowed to circulate. In addition, it is
clearly desirable to minimize the total amount of routing traffic that is sent around the
network after all, this is just overhead from the perspective of those who actually use the
network for their applications. The next few paragraphs describe some of the ways that these
goals are accomplished.
One easy way to reduce overhead is to avoid generating LSPs unless absolutely
necessary. This can be done by using very long timers, often on the order of hours, for
the periodic generation of LSPs. Given that the flooding protocol is truly reliable
when the topology changes, it is safe to assume that messages saying nothing has
changed do not need to be sent very often.
LSPs also carry a time to live. This is used to ensure that old link-state information
is eventually removed from the network. A node always decrements the TTL of a newly
received LSP before flooding it to its neighbors. It also "ages" the LSP while it is
stored in the node. When the TTL reaches 0, the node refloods the LSP with a TTL of 0,
which is interpreted by all the nodes in the network as a signal to delete that LSP.
Route Calculation
In practice, each switch computes its routing table directly from the LSPs it has
collected using a realization of Dijkstra's algorithm called the forward search
algorithm. Specifically, each switch maintains two lists, known as Tentative and
Confirmed. Each of these lists contains a set of entries of the form (Destination,
Cost, NextHop).
The algorithm works as follows:
1. Initialize the Confirmed list with an entry for myself; this entry has a cost of 0.
2. For the node just added to the Confirmed list in the previous step, call it node
Next, select its LSP.
3. For each neighbor (Neighbor) of Next, calculate the cost (Cost) to reach this
Neighbor as the sum of the cost from myself to Next and from Next to Neighbor.
(a) If Neighbor is currently on neither the Confirmed nor the Tentative list, then add
(Neighbor, Cost, NextHop) to the Tentative list, where NextHop is the direction I go to
reach Next.
(b) If Neighbor is currently on the Tentative list, and the Cost is less than the
currently listed cost for Neighbor, then replace the current entry with (Neighbor,
Cost, NextHop), where NextHop is the direction I go to reach Next.
4. If the Tentative list is empty, stop. Otherwise, pick the entry from the Tentative
list with the lowest cost, move it to the Confirmed list, and return to step 2.
Figure: Link-state routing, an example network.
The table traces the steps for building the routing table for node D. We denote the two
outputs of D by using the names of the nodes to which they connect, B and C. Note the
way the algorithm seems to head off on false leads (like the 11-unit cost path to B
that was the first addition to the Tentative list) but ends up with the least-cost
paths to all nodes.
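The forward search algorithm can be sketched directly from the steps above. The example topology and costs below are illustrative, chosen so that, as in the trace, D first tries an 11-unit path to B but ends with the 5-unit path via C:

```python
def forward_search(lsps, source):
    """Dijkstra's forward search over collected LSPs.
    lsps maps node -> {neighbor: link cost}; returns dest -> (cost, next_hop)."""
    confirmed = {source: (0, None)}   # step 1: myself at cost 0
    tentative = {}
    nxt = source
    while True:
        # steps 2-3: examine the LSP of the node just confirmed
        for nbr, link_cost in lsps[nxt].items():
            if nbr in confirmed:
                continue
            cost = confirmed[nxt][0] + link_cost
            hop = nbr if nxt == source else confirmed[nxt][1]
            if nbr not in tentative or cost < tentative[nbr][0]:
                tentative[nbr] = (cost, hop)     # add or improve an entry
        # step 4: confirm the cheapest tentative entry, or stop
        if not tentative:
            return confirmed
        nxt = min(tentative, key=lambda n: tentative[n][0])
        confirmed[nxt] = tentative.pop(nxt)

# Example network: from D, every least-cost route ends up going via C.
lsps = {"A": {"B": 5, "C": 10},
        "B": {"A": 5, "C": 3, "D": 11},
        "C": {"A": 10, "B": 3, "D": 2},
        "D": {"B": 11, "C": 2}}
routes = forward_search(lsps, "D")
```

Running it for node D confirms C at cost 2 first, then improves B from the direct 11-unit link to 5 via C, and finally reaches A at cost 10, also via C.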
The Open Shortest Path First Protocol (OSPF)
One of the most widely used link-state routing protocols is OSPF. The first word,
"Open", refers to the fact that it is an open, non-proprietary standard, created under
the auspices of the IETF. The "SPF" part comes from an alternative name for link-state
routing.
OSPF adds quite a number of features to the basic link-state algorithm described above, including the following:
Authentication of routing messages: This is a nice feature, since it is all too common for some misconfigured host to decide that it can reach every host in the universe at a cost of 0. When the host advertises this fact, every router in the surrounding neighbourhood updates
its forwarding tables to point to that host, and said host receives a vast amount of data that, in reality, it has no idea what to do with. This is not a strong enough form of authentication to prevent dedicated malicious users, but it alleviates many problems caused by misconfiguration.
Additional hierarchy: Hierarchy is one of the fundamental tools used to make systems more scalable. OSPF introduces another layer of hierarchy into routing by allowing a domain to be partitioned into areas. This means that a router within a domain does not necessarily
need to know how to reach every network within that domain— it may be sufficient for it to know only how to get to the right area. Thus, there is a reduction in the amount of information that must be transmitted to and stored in each node.
Load balancing: OSPF allows multiple routes to the same place to be assigned the same cost and will cause traffic to be distributed evenly over those routes.
OSPF header format
The Version field is currently set to 2, and the Type field may take the values 1 through 5.
The SourceAddr identifies the sender of the message, and the AreaId is a 32-bit identifier of
the area in which the node is located. The entire packet, except the authentication data, is
protected by a 16-bit checksum using the same algorithm as the IP header. The
Authentication type is 0 if no authentication is used; otherwise it may be 1, implying
that a simple password is used, or 2, which indicates that a cryptographic
authentication checksum is used. In the latter cases the Authentication field carries
the password or cryptographic checksum.
Of the five OSPF message types, type 1 is the "hello" message, which a router sends to its peers to notify them that it is still alive and connected as described above. The remaining types are used to request, send, and acknowledge the receipt of link-state messages. The basic building block of link-state messages in OSPF is known as the link-state advertisement (LSA).
OSPF link-state advertisement.
The above figure shows the packet format for a "type 1" link-state advertisement.
Type 1 LSAs advertise the cost of links between routers. Type 2 LSAs are used to advertise
networks to which the advertising router is connected, while other types are used to support
additional hierarchy as described in the next section. Many fields in the LSA should be
familiar from the preceding discussion.
The LS Age is the equivalent of a time to live, except that it counts up and the LSA expires
when the age reaches a defined maximum value. The Type field tells us that this is a type 1
LSA. In a type 1 LSA, the Link-state ID and the Advertising router field are identical.
Each carries a 32-bit identifier for the router that created this LSA. While a number of
assignment strategies may be used to assign this ID, it is essential that it be unique in the
routing domain and that a given router consistently uses the same router ID. One way to pick
a router ID that meets these requirements would be to pick the lowest IP address among all
the IP addresses assigned to that router. (Recall that a router may have a different IP address on each of its interfaces.) The LS sequence number is used exactly as described above, to detect old or duplicate LSAs. The LS checksum is of course used to verify that data has not been corrupted. It covers all fields in the packet except LS Age, so it is not necessary to recompute the checksum every time LS Age is incremented. Length is the length in bytes of the complete LSA.
Now we get to the actual link-state information.
This is made a little complicated by the presence of TOS (type of service) information.
Ignoring that for a moment, each link in the LSA is represented by a Link ID, some Link
Data, and a metric. The first two of these fields identify the link; a common way to do this
would be to use the router ID of the router at the far end of the link as the Link ID, and then
use the Link Data to disambiguate among multiple parallel links if necessary. The metric is of
course the cost of the link. Type tells us something about the link, for example, if it is a point-
to-point link. The TOS information is present to allow OSPF to choose different routes for IP
packets based on the value in their TOS field. Instead of assigning a single metric to a link, it
is possible to assign different metrics depending on the TOS value of the data.
(Or)
(b) Explain the working of Protocol Independent Multicast (PIM) in detail. (13)
Protocol Independent Multicast, or PIM, was developed in response to the scaling problems
of earlier multicast routing protocols. In particular, it was recognized that the existing
protocols did not scale well in environments where a relatively small proportion of routers
want to receive traffic for a certain group. For example, broadcasting traffic to all routers
until they explicitly ask to be removed from the distribution is not a good design choice if
most routers don’t want to receive the traffic in the first place. This situation is sufficiently
common that PIM divides the problem space into sparse mode and dense mode, where sparse
and dense refer to the proportion of routers that will want the multicast. PIM dense
mode (PIM-DM) uses a flood-and-prune algorithm like DVMRP and suffers from the same
scalability problem. PIM sparse mode (PIM-SM) has become the dominant multicast routing
protocol and is the focus of our discussion here. The “protocol independent” aspect of PIM,
by the way, refers to the fact that, unlike earlier protocols such as DVMRP, PIM does not
depend on any particular sort of unicast routing—it can be used with any unicast routing
protocol, as we will see below.
In PIM-SM, routers explicitly join the multicast distribution tree using PIM protocol
messages known as Join messages. Note the contrast to DVMRP’s approach of creating a
broadcast tree first and then pruning the uninterested routers. The question that arises is
where to send those Join messages because, after all, any host (and any number of hosts)
could send to the multicast group. To address this, PIM-SM assigns to each group a special
router known as the rendezvous point (RP). In general, a number of routers in a domain are
configured to be candidate RPs, and PIM-SM defines a set of procedures by which all the
routers in a domain can agree on the router to use as the RP for a given group. These
procedures are rather complex, as they must deal with a wide variety of scenarios, such as the
failure of a candidate RP and the partitioning of a domain into two separate networks due to a
number of link or node failures. For the rest of this discussion, we assume that all routers in
a domain know the unicast IP address of the RP for a given group. A multicast forwarding
tree is built as a result of routers sending Join messages to the RP. PIM-SM allows two types
of trees to be constructed: a shared tree, which may be used by all senders, and a source-
specific tree, which may be used only by a specific sending host. The normal mode of
operation creates the shared tree first, followed by one or more source-specific trees if there is
enough traffic to warrant it. Because
building trees installs state in the routers along the tree, it is important that the default is to
have only one tree for a group, not one for every sender to a group. When a router sends a
Join message toward the RP for a group G, it is sent using normal IP unicast transmission.
This is illustrated in Figure 4.14(a), in which router R4 is sending a Join to the rendezvous
point for some group. The initial Join message is “wildcarded”; that is, it applies to all
senders. A Join message clearly must pass through some sequence of routers before reaching
the RP (e.g., R2). Each router along the path looks at the Join and creates a forwarding table
entry for the shared tree, called a (*, G) entry (where * means “all senders”). To create the
forwarding table entry, it looks at the interface on which the Join arrived and marks that
interface as one on which it should forward data packets for this group. It then determines
which interface it will use to forward the Join toward the RP. This will be the only acceptable
interface for incoming packets sent to this group. It then forwards the Join toward the RP.
Eventually, the message arrives at the RP, completing the construction of the tree branch. The
shared tree thus constructed is shown as a solid line from the RP to R4 in Figure 4.14(a).
As more routers send Joins toward the RP, they cause new branches to be added to the tree,
as illustrated in Figure 4.14(b). Note that, in this case, the Join only needs to travel to R2,
which can add the new branch to the tree simply by adding a new outgoing interface to the
forwarding table entry created for this group. R2 need not forward the Join on to the RP.
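The (*, G) state-building just described can be sketched with a toy forwarding table. This is a hypothetical illustration: the interface names and the R2-centric topology are invented, and real PIM state carries much more than this.

```python
# Toy model of a router's (*, G) shared-tree state, built as Joins arrive.
forwarding = {}   # ("*", group) -> {"in": iface toward RP, "out": set of ifaces}

def handle_join(group, arrival_iface, iface_toward_rp):
    """Process a wildcard (*, G) Join arriving on arrival_iface.

    Returns True if this router was already on the shared tree for the
    group, in which case it need not forward the Join toward the RP.
    """
    entry = forwarding.setdefault(("*", group),
                                  {"in": iface_toward_rp, "out": set()})
    already_on_tree = bool(entry["out"])
    entry["out"].add(arrival_iface)   # forward group data out this interface
    return already_on_tree

# R2's view: a Join from R4, then one from R5, for the same group G
print(handle_join("G", "if-to-R4", "if-to-RP"))  # → False: forward Join to RP
print(handle_join("G", "if-to-R5", "if-to-RP"))  # → True: just add a branch
```

The second call mirrors the text: R2 adds a new outgoing interface to the existing (*, G) entry and does not forward the Join on to the RP.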
Note also that the end result of this process is to build a tree whose root is the RP. At this
point, suppose a host wishes to send a message to the group. To do so, it constructs a packet
with the appropriate multicast group address as its destination and sends it to a router on its
local network known as the designated router (DR). Suppose the DR is R1 in Figure 4.14.
There is no state for this multicast group between R1 and the RP at this point, so instead of
simply forwarding the multicast packet, R1 tunnels it to the RP. That is, R1 encapsulates the
multicast packet inside a PIM Register message that it sends to the unicast IP address of the
RP. The RP receives the packet addressed to it, looks at the payload of the Register message,
and finds inside an IP packet addressed to the multicast address of this group. The RP, of
course, does know what to do with such a packet—it sends it out onto the shared tree of
which the RP is the root. In the example of Figure 4.14, this means that the RP sends the
packet on to R2, which is able to forward it on to R4 and R5. The complete delivery of a
packet from R1 to R4 and R5 is shown in Figure 4.15. We see the tunneled packet travel from
R1 to the RP with an extra IP header containing the unicast address of RP, and then the
multicast packet addressed to G making its way along the shared tree to R4 and R5.
At this point, we might be tempted to declare success, since all hosts can send to all receivers
this way. However, there is some bandwidth inefficiency and processing cost in the
encapsulation and decapsulation of packets on the way to the RP, so the RP forces knowledge
about this
group into the intervening routers so tunneling can be avoided. It sends a Join message
toward the sending host (Figure 4.14(c)). As this Join travels toward the host, it causes the
routers along the path (R3) to learn about the group, so that it will be possible for the DR to
send the packet to the group as native (i.e., not tunneled) multicast packets.
An important detail to note at this stage is that the Join message sent by the RP to the sending
host is specific to that sender, whereas the previous ones sent by R4 and R5 applied to all
senders. Thus, the effect of the new Join is to create sender-specific state in the routers
between the identified source and the RP. This is referred to as (S, G) state, since it applies to
one sender to one group, and contrasts with the (*, G) state that was installed between the
receivers and the RP that applies to all senders.
Thus, in Figure 4.14(c), we see a source-specific route from R1 to the RP (indicated by the
dashed line) and a tree that is valid for all senders from the RP to the receivers (indicated by
the solid line). The next possible optimization is to replace the entire shared tree with a
source-specific tree. This is desirable because the path from sender to receiver via the RP
might be significantly longer than the shortest possible path. This again is likely to be
triggered by a high data rate being observed from some sender. In this case, the router at the
downstream end of the tree—say, R4 in our example—sends a source-specific Join toward
the source. As it follows the shortest path toward the source, the routers along the way create
(S, G) state for this tree, and the result is a tree that has its root at the source, rather than the
RP. Assuming both R4 and R5 made the switch to the source-specific tree, we would end up with the tree shown in Figure 4.14(d). Note that this tree no longer involves the RP at all. We
have removed the shared tree from this picture to simplify the diagram, but in reality all
routers with receivers for a group must stay on the shared tree in case new senders show up.
We can now see why PIM is protocol independent. All of its mechanisms for building and
maintaining trees take advantage of unicast routing without depending on any particular
unicast routing protocol.
The formation of trees is entirely determined by the paths that Join messages follow, which is determined by the choice of shortest paths made by unicast routing. Thus, to be precise, PIM is "unicast routing protocol independent," as compared to DVMRP. Note that PIM is very
much bound up with the Internet Protocol—it is not protocol independent in terms of
network-layer protocols.
The design of PIM-SM again illustrates the challenges in building scalable networks and how
scalability is sometimes pitted against some sort of optimality. The shared tree is certainly
more scalable than a source-specific tree, in the sense that it reduces the total state in routers
to be on the order of the number of groups rather than the number of senders times the
number of groups. However, the source-specific tree is likely to be necessary to achieve
efficient routing and effective use of link bandwidth.
14. (a)(i) Explain the adaptive flow control and retransmission techniques
used in TCP. (8)
TCP uses a variant of sliding window known as adaptive flow control that:
o guarantees reliable delivery of data
o ensures ordered delivery of data
o enforces flow control at the sender
Receiver advertises its window size to the sender using AdvertisedWindow field.
Sender thus cannot have unacknowledged data greater than AdvertisedWindow.
Send Buffer: The sending TCP maintains a send buffer divided into three regions: acknowledged data, unacknowledged (sent) data, and data written but not yet transmitted.
Send buffer maintains three pointers LastByteAcked, LastByteSent, and LastByteWritten such that:
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
A byte can be sent only after being written, and only a sent byte can be acknowledged. Bytes to the left of LastByteAcked are not kept, as they have already been acknowledged.
Receive Buffer: The receiving TCP maintains a receive buffer to hold data even if it arrives out of order. The receive buffer maintains three pointers, namely LastByteRead, NextByteExpected, and LastByteRcvd, such that:
LastByteRead < NextByteExpected ≤ LastByteRcvd + 1
A byte cannot be read until that byte and all preceding bytes have been received. If data is received in order, then NextByteExpected = LastByteRcvd + 1. Bytes to the left of LastByteRead are not buffered, since they have been read by the application.
Flow Control: The sizes of the send and receive buffers are MaxSendBuffer and MaxRcvBuffer respectively.
o Sending TCP prevents overflowing of send buffer by maintaining LastByteWritten − LastByteAcked ≤ MaxSendBuffer
o Receiving TCP avoids overflowing its receive buffer by maintaining LastByteRcvd −LastByteRead ≤ MaxRcvBuffer
o Receiver throttles the sender by having AdvertisedWindow based on free space available for buffering.
o Sending TCP adheres to AdvertisedWindow by computing EffectiveWindow that limits how much data it should send.
o When data arrives, LastByteRcvd moves to its right and AdvertisedWindow shrinks. Receiver acknowledges only, if preceding bytes have arrived.
o AdvertisedWindow expands when data is read by the application. If data is read as fast as it arrives, then AdvertisedWindow = MaxRcvBuffer. If data is read slowly, it eventually leads to an AdvertisedWindow of size 0. The AdvertisedWindow field is designed to allow the sender to keep the pipe full.
Fast Sender vs Slow Receiver
If the sender transmits at a higher rate, the receiver's buffer fills up. Hence, AdvertisedWindow shrinks, eventually to 0.
Receiver advertises a window of size 0, thus sender cannot transmit as it gets blocked.
When receiving process reads some data, those bytes are acknowledged and
AdvertisedWindow expands.
When an acknowledgement arrives for x bytes, LastByteAcked is incremented by x
and
send buffer space is freed accordingly to send further data.
ADAPTIVE RETRANSMISSION ALGORITHMS. (OR) HOW IS TIMEOUT
ESTIMATED IN TCP
TCP guarantees reliability through retransmission: a segment is retransmitted when its ACK does not arrive before the timeout.
Timeout is based on RTT, but it is highly variable for any two hosts on the internet.
Appropriate timeout is chosen using adaptive retransmission.
Original Algorithm
SampleRTT is the duration between sending a segment and arrival of its ACK.
EstimatedRTT is weighted average of previous estimate and current sample.
EstimatedRTT = α × EstimatedRTT + (1 − α) × SampleRTT
(where α is known as a smoothing factor with value in the range 0.8–0.9)
Timeout is determined as twice the value of EstimatedRTT .
TimeOut = 2 × EstimatedRTT
In original TCP, timeout is thus computed as function of running average of RTT.
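The running-average computation can be sketched as follows. The value α = 0.85 and the initial estimate are illustrative choices within the range the text gives, not mandated values.

```python
# Sketch of the original adaptive-retransmission computation.
class RttEstimator:
    def __init__(self, alpha=0.85, initial_rtt=1.0):
        self.alpha = alpha                # smoothing factor, 0.8-0.9
        self.estimated_rtt = initial_rtt  # seconds (illustrative start)

    def sample(self, sample_rtt):
        # EstimatedRTT = alpha * EstimatedRTT + (1 - alpha) * SampleRTT
        self.estimated_rtt = (self.alpha * self.estimated_rtt
                              + (1 - self.alpha) * sample_rtt)
        return self.estimated_rtt

    def timeout(self):
        # TimeOut = 2 * EstimatedRTT
        return 2 * self.estimated_rtt

est = RttEstimator()
est.sample(0.5)                    # EstimatedRTT = 0.85*1.0 + 0.15*0.5 = 0.925
print(round(est.timeout(), 3))     # → 1.85
```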
Karn/Partridge Algorithm
A flaw discovered in the original TCP algorithm was that an ACK segment acknowledges receipt of data, not a transmission. When an ACK arrives after a retransmission, it is impossible to decide whether to pair it with the original or the retransmitted segment for SampleRTT estimation. If the ACK is associated with the original transmission, SampleRTT becomes too large; if it is associated with the retransmission, SampleRTT becomes too small. Karn and Partridge proposed that SampleRTT should be taken only for segments that are sent once, i.e., for segments that are not retransmitted. Each time TCP retransmits, the timeout is doubled, since loss of segments is mostly due to congestion.
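The two Karn/Partridge rules can be sketched directly. This is a self-contained illustration; α and the initial values are assumptions, and real TCP tracks this state per connection.

```python
# Karn/Partridge sketch: skip samples for retransmitted segments, and
# double the timeout on every retransmission (exponential backoff).
alpha = 0.85            # illustrative smoothing factor
estimated_rtt = 1.0     # illustrative initial estimate, seconds
timeout = 2 * estimated_rtt

def on_ack(sample_rtt, was_retransmitted):
    """Update the estimate only when the ACK is unambiguous."""
    global estimated_rtt, timeout
    if was_retransmitted:
        return timeout   # ambiguous ACK: take no sample, keep the timeout
    estimated_rtt = alpha * estimated_rtt + (1 - alpha) * sample_rtt
    timeout = 2 * estimated_rtt
    return timeout

def on_retransmit():
    global timeout
    timeout *= 2         # back off: loss is mostly due to congestion
    return timeout

print(round(on_ack(0.5, was_retransmitted=False), 3))  # → 1.85
print(round(on_retransmit(), 3))                       # → 3.7
print(round(on_ack(0.5, was_retransmitted=True), 3))   # → 3.7 (sample ignored)
```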
(ii) With TCP's slow start and AIMD for congestion control, show how the
window size will vary for a transmission where every 5th packet is lost. Assume
an advertised window size of 50 MSS. (5)
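A question like this can be approached with a small simulation. The sketch below is a hedged illustration, not the exam's exact packet-level trace: it assumes a loss is detected once every 5th transmission round, that slow start doubles cwnd each RTT, and that after a loss cwnd restarts at half its value and grows by 1 MSS per RTT (AIMD). The advertised window of 50 MSS caps cwnd throughout.

```python
ADV_WINDOW = 50   # advertised window, in MSS

def cwnd_trace(rounds=15, loss_every=5):
    """Congestion window per transmission round, under the assumptions above."""
    cwnd, ssthresh = 1, ADV_WINDOW
    trace = []
    for rnd in range(1, rounds + 1):
        trace.append(cwnd)
        if rnd % loss_every == 0:              # loss detected this round
            ssthresh = max(cwnd // 2, 1)       # multiplicative decrease
            cwnd = ssthresh                    # fast-recovery-style restart
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ADV_WINDOW)   # slow start: double per RTT
        else:
            cwnd = min(cwnd + 1, ADV_WINDOW)   # additive increase: +1 MSS/RTT
    return trace

print(cwnd_trace())
# → [1, 2, 4, 8, 16, 8, 9, 10, 11, 12, 6, 7, 8, 9, 10]
```

The trace shows the expected shape: exponential growth, a halving at each loss, then linear growth between losses.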
(or)
(b)(i) Explain congestion avoidance using random early detection in transport
layer with an example.(7)
A second mechanism, called random early detection (RED), is similar to the DECbit scheme
in that each router is programmed to monitor its own queue length and, when it detects that
congestion is imminent, to notify the source to adjust its congestion window. RED, invented
by Sally Floyd and Van Jacobson in the early 1990s, differs from the DECbit scheme in
two major ways.
The first is that rather than explicitly sending a congestion notification message to the source,
RED is most commonly implemented such that it implicitly notifies the source of congestion
by dropping one of its packets.
The source is, therefore, effectively notified by the subsequent timeout or duplicate ACK. In
case you haven’t already guessed, RED is designed to be used in conjunction with TCP,
which currently detects congestion by means of timeouts (or some other means of detecting
packet loss such as duplicate ACKs). As the “early” part of the RED acronym suggests, the
gateway drops the packet earlier than it would have to, so as to notify the source that it should
decrease its congestion window sooner than it would normally have. In other words, the router
drops a few packets before it has exhausted its buffer space completely, so as to cause the
source to slow down, with the hope that this will mean it does not have to drop lots of packets
later on. Note that RED could easily be adapted to work with an explicit feedback scheme
simply by marking a packet instead of dropping it, as discussed in the sidebar on Explicit
Congestion Notification.
The second difference between RED and DECbit is in the details of how RED decides when
to drop a packet and what packet it decides to drop. To understand the basic idea, consider a
simple FIFO queue. Rather than wait for the queue to become completely full and then be
forced to drop each arriving packet, we could decide to drop each arriving packet with some
drop probability whenever the queue length exceeds some drop level. This idea is called
early random drop. The RED algorithm defines the details of how to monitor the queue
length and when to drop a packet.
In the following paragraphs, we describe the RED algorithm as originally proposed by Floyd
and Jacobson. We note that several modifications have since been proposed both by the
inventors and by other researchers; some of these are discussed in Further Reading. However,
the key ideas are the same as those presented below, and most current implementations are
close to the algorithm that follows.
First, RED computes an average queue length using a weighted running average similar to
the one used in the original TCP timeout computation.
That is, AvgLen is computed as
AvgLen = (1 − Weight) × AvgLen + Weight × SampleLen
where 0 < Weight < 1 and SampleLen is the length of the queue when a sample measurement
is made. In most software implementations, the queue length is measured every time a new
packet arrives at the gateway. In hardware, it might be calculated at some fixed sampling
interval. The reason for using an average queue length rather than an instantaneous one is that
it more accurately captures the notion of congestion.
Because of the bursty nature of Internet traffic, queues can become full very quickly and then
become empty again. If a queue is spending most of its time empty, then it’s probably not
appropriate to conclude that the router is congested and to tell the hosts to slow down. Thus,
the weighted running average calculation tries to detect long-lived congestion, as indicated
in the right-hand portion of Figure , by filtering out short-term changes in the queue length.
You can think of the running average as a low-pass filter, where Weight determines the time
constant of the filter.
The question of how we pick this time constant is discussed below. Second, RED has two
queue length thresholds that trigger certain activity: MinThreshold and MaxThreshold. When
a packet arrives at the gateway, RED compares the current AvgLen with these two
thresholds, according to the following rules:
if AvgLen ≤ MinThreshold
→ queue the packet
if MinThreshold < AvgLen < MaxThreshold
→ calculate probability P
→ drop the arriving packet with probability P
if MaxThreshold ≤ AvgLen
→ drop the arriving packet
If the average queue length is smaller than the lower threshold, no action is taken, and if the
average queue length is larger than the upper threshold, then the packet is always dropped. If
the average queue length is between the two thresholds, then the newly arriving packet is
dropped with some probability P. This situation is depicted in Figure . The approximate
relationship between P and AvgLen is shown in Figure
Note that the probability of drop increases slowly when AvgLen is between the two
thresholds, reaching MaxP at the upper threshold, at which point it jumps to unity. The
rationale behind this is that, if AvgLen reaches the upper threshold, then the gentle approach
(dropping a few packets) is not working and drastic measures are called for: dropping all
arriving packets. Some research has suggested that a smoother transition from random
dropping to complete dropping, rather than the discontinuous approach shown here, may be
appropriate.
Although Figure shows the probability of drop as a function only of AvgLen, the situation is
actually a little more complicated. In fact, P is
a function of both AvgLen and how long it has been since the last packet was dropped.
Specifically, it is computed as follows:
TempP = MaxP×(AvgLen−MinThreshold)/(MaxThreshold−MinThreshold)
P = TempP/(1−count×TempP)
TempP is the variable plotted on the y-axis in the figure; count keeps track of how many newly arriving packets have been queued (not dropped) while AvgLen has been between the two thresholds. P increases slowly as count increases, thereby making a drop increasingly
likely as the time since the last drop increases. This makes closely spaced drops relatively
less likely than widely spaced drops. This extra step in calculating P was introduced by the
inventors of RED when they observed that, without it, the packet drops were not well
distributed in time but instead tended to occur in clusters. Because packet arrivals from a
certain connection are likely to arrive in bursts, this clustering of drops is likely to cause
multiple drops in a single connection. This is not desirable, since only one drop per round-trip
time is enough to cause a connection to reduce its window size, whereas multiple drops might
send it back into slow start.
As an example, suppose that we set MaxP to 0.02 and count is initialized to zero. If the
average queue length were halfway between the two thresholds, then TempP, and the initial
value of P, would be half of MaxP, or 0.01. An arriving packet, of course, has a 99 in 100
chance of getting into the queue at this point. With each successive packet that is not
dropped, P slowly increases, and by the time 50 packets have arrived without a drop, P would
have doubled to 0.02. In the unlikely event that 99 packets arrived without loss, P reaches 1,
guaranteeing that the next packet is dropped. The important thing about this part of the
algorithm is that it ensures a roughly even distribution of drops over time.
The intent is that, if RED drops a small percentage of packets when AvgLen exceeds
MinThreshold, this will cause a few TCP connections to reduce their window sizes, which in
turn will reduce the rate at which packets arrive at the router. All going well, AvgLen will
then decrease and congestion is avoided. The queue length can be kept short, while
throughput remains high since few packets are dropped.
Note that, because RED is operating on a queue length averaged over time, it is possible for
the instantaneous queue length to be much longer than AvgLen. In this case, if a packet
arrives and there is nowhere to put it, then it will have to be dropped. When this happens,
RED is operating in tail drop mode. One of the goals of RED is to prevent tail drop behavior
if possible.
The random nature of RED confers an interesting property on the algorithm. Because RED
drops packets randomly, the probability that RED decides to drop a particular flow’s
packet(s) is roughly proportional to the share of the bandwidth that that flow is currently
getting at that router.
This is because a flow that is sending a relatively large number of packets is providing more
candidates for random dropping. Thus, there is some sense of fair resource allocation built
into RED, although it is by no means precise.
Consider the setting of the two thresholds, MinThreshold and MaxThreshold. If the traffic is
fairly bursty, then MinThreshold should be sufficiently large to allow the link utilization to be
maintained at an acceptably high level. Also, the difference between the two thresholds
should be larger than the typical increase in the calculated average queue length in one RTT.
Setting MaxThreshold to twice MinThreshold seems to be a reasonable rule of thumb given
the traffic mix on today’s Internet. In addition, since we expect the average queue length to
hover between the two thresholds during periods of high load, there should be enough free
buffer space above MaxThreshold to absorb the natural bursts that occur in Internet traffic
without forcing the router to enter tail drop mode.
We noted above that Weight determines the time constant for the running average low-pass
filter, and this gives us a clue as to how we might pick a suitable value for it. Recall that RED
is trying to send signals to TCP flows by dropping packets during times of congestion.
Suppose that a router drops a packet from some TCP connection and then immediately
forwards some more packets from the same connection. When those packets arrive at the
receiver, it starts sending duplicate ACKs to the sender. When the sender sees enough
duplicate ACKs, it will reduce its window size. So, from the time the router drops a packet
until the time when the same router starts to see some relief from the affected connection in
terms of a reduced window size, at least one round-trip time must elapse for that connection.
There is probably not much point in having the router respond to congestion on time scales
much less than the round-trip time of the connections passing through it. As noted previously,
100 ms is not a bad estimate of average round-trip times in the Internet. Thus, Weight should
be chosen such that changes in queue length over time scales much less than 100 ms are
filtered out.
Since RED works by sending signals to TCP flows to tell them to slow down, you might
wonder what would happen if those signals are ignored.
This is often called the unresponsive flow problem, and it has been a matter of some concern
for several years. Unresponsive flows use more than their fair share of network resources and
could cause congestive collapse if there were enough of them, just as in the days before TCP
congestion control.
There is also the possibility that a variant of RED could drop more heavily from flows that are unresponsive to the initial hints that it sends; this continues to be an area of active research.
(ii) Explain the Differentiated Services operation of QoS in detail. (6)
For many years, packet-switched networks have offered the promise of supporting
multimedia applications that combine audio, video, and data. After all, once digitized, audio
and video information becomes like any other form of data—a stream of bits to be transmitted. One obstacle to the fulfillment of this promise has been the need for higher-bandwidth links.
Recently, however, improvements in coding have reduced the bandwidth needs of audio and
video applications, while at the same time link speeds have increased. There is more to
transmitting audio and video over a network than just providing sufficient bandwidth,
however. Participants in a telephone conversation, for example, expect to be able to converse
in such a way that one person can respond to something said by the other and be heard almost
immediately. Thus, the timeliness of delivery can be very important.
We refer to applications that are sensitive to the timeliness of data as real-time applications.
Voice and video applications tend to be the canonical examples, but there are others such as
industrial control—you would like a command sent to a robot arm to reach it before the arm
crashes into something. Even file transfer applications can have timeliness constraints, such
as a requirement that a database update complete overnight before the business that needs the
data resumes on the next day.
The distinguishing characteristic of real-time applications is that they need some sort of
assurance from the network that data is likely to arrive on time (for some definition of “on
time”). Whereas a non-realtime application can use an end-to-end retransmission strategy to
make sure that data arrives correctly, such a strategy cannot provide timeliness:
Retransmission only adds to total latency if data arrives late. Timely arrival must be provided
by the network itself (the routers), not just at the network edges (the hosts). We therefore
conclude that the best-effort model, in which the network tries to deliver your data but makes
no promises and leaves the cleanup operation to the edges, is not sufficient for real-time
applications. What we need is a new service model, in which applications that need higher
assurances can ask the network for them.
The network may then respond by providing an assurance that it will do better or perhaps by
saying that it cannot promise anything better at the moment. Note that such a service model is
a superset of the current model: Applications that are happy with best-effort service should be
able to use the new service model; their requirements are just less stringent. This implies that
the network will treat some packets differently from others—something that is not done in the
best-effort model. A network that can provide these different levels of service is often said to
support quality of service (QoS).
At this point, you might be thinking “Hold on. Doesn’t the Internet already support real-time
applications?” Most of us have tried some sort of Internet telephony application such as
Skype at this point, and it seems to work OK. The reason for this, in part, is because best-
effort service is often quite good. (Skype in particular also does a number of clever things to
try to deal with lack of QoS in the Internet.) The key word here is “often.” If you want a
service that is reliably good enough for your real-time applications, then best-effort—which
by definition makes no assurances—won't be sufficient. We'll return later to the topic of just
how necessary QoS really is.
15. (a)(i) Describe how SMTP transfers message from one host to another
with suitable illustration (6)
The Simple Mail Transfer Protocol (SMTP) is used to transfer electronic mail from one user
to another. This task is done by means of the email client software (User Agent) that the user is using.
User Agents help the user to type and format the email and store it until internet is available.
When an email is submitted for sending, the sending process is handled by a Message Transfer Agent, which normally comes built into the email client software.
Message Transfer Agent uses SMTP to forward the email to another Message Transfer Agent
(Server side). While SMTP is used by end user to only send the emails, the Servers normally
use SMTP to send as well as receive emails. SMTP uses TCP port number 25 and 587.
Client software uses Internet Message Access Protocol (IMAP) or POP protocols to receive
Emails.
Email is one of the oldest network applications. How email works is to (1) distinguish the
user interface(i.e your mail reader) from the underlying message transfer protocol (in this
case, SMTP.
Message Format
RFC 822 defines messages to have two parts: a header and a body. Both parts are represented
in ASCII text. Originally, the body was assumed to be simple text. This is still the case,
although RFC 822 has been augmented by MIME to allow the message body to carry all sorts of
data. This data is still represented as ASCII text, but because it may be an encoded version
of, say, a JPEG image, it's not necessarily readable by human users. More on MIME in a
moment.
The message header is a series of <CRLF>-terminated lines. (<CRLF> stands for carriage-
return + line-feed, a pair of ASCII control characters often used to indicate the end of a
line of text.) The header is separated from the message body by a blank line. Each header
line contains a type and value separated by a colon. Many of these header lines are familiar
to users, since they are asked to fill them out when they compose an email message. For
example, the To: header identifies the message recipient, and the Subject: header says
something about the purpose of the message. Other headers are filled in by the underlying
mail delivery system. Examples include
Date: (when the message was transmitted),
From: (what user sent the message), and
Received: (each mail server that handled this message). There are, of course, many other
header lines; the interested reader is referred to RFC 822.
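The header/body layout described above can be seen with Python's standard-library email package (a minimal sketch; the addresses are made up for illustration):

```python
from email.message import EmailMessage

# Build a simple RFC 822-style message: header lines, a blank line, then the body.
msg = EmailMessage()
msg["To"] = "bob@example.com"       # hypothetical recipient
msg["From"] = "alice@example.com"   # hypothetical sender
msg["Subject"] = "Lunch?"
msg.set_content("Meet at noon.")

text = msg.as_string()
# The first blank line separates the header block from the body.
header, _, body = text.partition("\n\n")
print(header)
```

Each printed header line has the "Type: value" shape the text describes.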
MIME consists of three basic pieces. The first piece is a collection of header lines that
augment the original set defined by RFC 822. These header lines describe, in various ways,
the data being carried in the message body. They include MIME-Version: (the version of MIME
being used), Content-Description: (a human-readable description of what's in the message,
analogous to the Subject: line), Content-Type: (the type of data contained in the message),
and Content-Transfer-Encoding: (how the message body is encoded).
The second piece is definitions for a set of content types (and subtypes). For example, MIME
defines two different still-image types, denoted image/gif and image/jpeg, each with the
obvious meaning. As another example, text/plain refers to simple text you might find in a
vanilla 822-style message, while text/richtext denotes a message that contains "marked up"
text (text using special fonts, italics, etc.). As a third example, MIME defines an
application type, where the subtypes correspond to the output of different application
programs (e.g., application/postscript and application/msword).
MIME also defines a multipart type that says how a message carrying more than one data type
is structured. This is like a programming language that defines both base types (e.g.,
integers and floats) and compound types (e.g., structures and arrays). One possible multipart
subtype is mixed, which says that the message contains a set of independent data pieces in a
specified order. Each piece then has its own header line that describes the type of that
piece.
The third piece is a way to encode the various data types so they can be shipped in an ASCII
email message. The problem is that, for some data types (a JPEG image, for example), any
given 8-bit byte might contain any of 256 different values. Only a subset of these values are
valid ASCII characters. It is important that email messages contain only ASCII, because they
might pass through a number of intermediate systems (gateways, as described below) that
assume all email is ASCII and would corrupt the message if it contained non-ASCII
characters. To address this issue, MIME uses a straightforward encoding of binary data into
the ASCII character set. The encoding is called base64. The idea is to map every three bytes
of the original binary data into four ASCII characters. This is done by grouping the binary
data into 24-bit units and breaking each such unit into four 6-bit pieces. Each 6-bit piece
maps onto one of 64 valid ASCII characters; for example, 0 maps onto A, 1 maps onto B, and
so on. If you look at a message that has been encoded using the base64 encoding scheme, you
will notice only the 52 uppercase and lowercase letters, the 10 digits 0 through 9, and the
special characters + and /. Together, these make up the 64-character base64 alphabet.
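The 3-bytes-to-4-characters mapping can be checked with Python's standard base64 module (a quick sketch with arbitrary input bytes):

```python
import base64

data = bytes([0, 1, 2])           # three binary bytes = one 24-bit unit
encoded = base64.b64encode(data)  # four ASCII characters out
print(encoded)                    # b'AAEC'

# Every 3 input bytes become exactly 4 output characters.
blob = bytes(range(30))
assert len(base64.b64encode(blob)) == len(blob) // 3 * 4
```

The 24 bits 00000000 00000001 00000010 split into the 6-bit values 0, 0, 4, 2, which map onto A, A, E, C, matching the output above.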
Message Transfer:
Next we look at SMTP, the protocol used to transfer messages from one host to another. To
place SMTP in the right context, we need to identify the key players. First, users interact
with a mail reader when they compose, file, search, and read their email. There are countless
mail readers available, just as there are many web browsers to choose from; many web browsers
now also include a mail reader.
Second, there is a mail daemon running on each host. You can think of this process as playing
the role of a post office: mail readers give the daemon messages they want to send to other
users, the daemon uses SMTP running over TCP to transmit the message to a daemon running on
another machine, and the daemon puts incoming messages into the user's mailbox. Since SMTP is
a protocol that anyone could implement, in theory there could be many different
implementations of the mail daemon. It turns out, though, that the mail daemon running on
most hosts is derived from the sendmail program originally implemented in Berkeley Unix.
While it is certainly possible that the sendmail program on a sender's machine establishes an
SMTP/TCP connection to the sendmail program on the recipient's machine, in many cases the
mail traverses one or more mail gateways on its route from the sender's host to the
receiver's host. Like the end hosts, these gateways also run a sendmail process. It's not an
accident that these intermediate nodes are called "gateways", since their job is to store and
forward email messages.
Mail Reader:
The final step is for the user to actually retrieve her messages from the mailbox, read them,
reply to them, and possibly save a copy for future reference. The user performs all these
actions by interacting with a mail reader. In many cases, this reader is just a program
running on the same machine on which the user's mailbox resides, in which case it simply
reads and writes the file that implements the mailbox. In other cases, the user accesses her
mailbox from a remote machine using yet another protocol, such as the Post Office Protocol
(POP) or the Internet Message Access Protocol (IMAP). It is beyond the scope of this book to
discuss the user interface aspects of the mail reader, but it is definitely within our scope
to talk about the access protocol. We consider IMAP, in particular.
(ii) Explain IMAP with its state transition diagram (7)
IMAP is similar to SMTP in many ways. It is a client/server protocol running over TCP,
where the client (running on the user’s desktop machine) issues commands in the form of
<CRLF>-terminated ASCII text lines and the mail server (running on the machine that
maintains the user’s mailbox) responds in kind. The exchange begins with the client
authenticating him- or herself and identifying the mailbox he or she
wants to access. This can be represented by the simple state transition diagram shown in
Figure 9.2. In this diagram, LOGIN, AUTHENTICATE, SELECT, EXAMINE, CLOSE, and
LOGOUT are example commands that the client can issue, while OK is one possible server
response. Other common commands include FETCH, STORE, DELETE, and EXPUNGE,
with the obvious meanings. Additional server responses include NO (client does not have
permission to perform that operation) and BAD (command is ill formed).
When the user asks to FETCH a message, the server returns it in MIME format and the mail
reader decodes it. In addition to the message itself, IMAP also defines a set of message
attributes that are exchanged as part of other commands, independent of transferring the
message itself. Message attributes include information like the size of the message and, more
interestingly, various flags associated with the message (e.g., Seen, Answered, Deleted, and
Recent). These flags are used to keep the client and server synchronized; that is, when the
user deletes a message in the mail reader, the client needs to report this fact to the mail server.
Later, should the user decide to expunge all deleted messages, the client issues an EXPUNGE
command to the server, which knows to actually remove all earlier deleted messages from the
mailbox.
Finally, note that when the user replies to a message, or sends a new message, the mail reader
does not forward the message from the client to the mail server using IMAP, but it instead
uses SMTP. This means that the user’s mail server is effectively the first mail gateway
traversed along the path from the desktop to the recipient's mailbox.
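The state transition diagram can be sketched as a small state machine (a toy model, not a real IMAP client; the state names and allowed transitions follow the diagram described above):

```python
# Toy model of the IMAP client state machine: each command moves the
# connection between states, mirroring the state transition diagram.
TRANSITIONS = {
    ("not_authenticated", "LOGIN"):        "authenticated",
    ("not_authenticated", "AUTHENTICATE"): "authenticated",
    ("authenticated",     "SELECT"):       "selected",
    ("authenticated",     "EXAMINE"):      "selected",
    ("selected",          "CLOSE"):        "authenticated",
}

def step(state, command):
    """Apply one command; LOGOUT is legal from any state."""
    if command == "LOGOUT":
        return "logout"
    try:
        return TRANSITIONS[(state, command)]
    except KeyError:
        # Mirrors the server's BAD/NO responses for an out-of-place command.
        raise ValueError(f"BAD: {command} not allowed in state {state}")

state = "not_authenticated"
for cmd in ["LOGIN", "SELECT", "CLOSE", "LOGOUT"]:
    state = step(state, cmd)
print(state)  # logout
```

Issuing SELECT before authenticating raises an error, just as a real server would reject the command.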
(or)
(b) List the elements of network management and explain the operation of the SNMP
protocol in detail (8)
A large network can often get into various kinds of trouble: routers dropping too many
packets, hosts going down, and so on. One has to keep track of all these occurrences and
adapt to such situations, and a protocol has been defined for this purpose. Under this
scheme, all entities in the network belong to four classes:
1. Managed nodes
2. Management stations
3. Management information (called objects)
4. A management protocol
What we have just described is the problem of network management, an issue that pervades the
entire network architecture. Since the nodes we want to keep track of are distributed, our
only real option is to use the network to manage the network. This means we need a protocol
that allows us to read, and possibly write, various pieces of state information on different
network nodes. The most widely used protocol for this purpose is the Simple Network
Management Protocol (SNMP).
SNMP is essentially a specialized request/reply protocol that supports two kinds of request
messages: GET and SET. The former is used to retrieve a piece of state from some node, and
the latter is used to store a new piece of state in some node. (SNMP also supports a third
operation, GET-NEXT, which we explain below.) The following discussion focuses on the GET
operation, since it is the one most frequently used.
SNMP is used in the obvious way. A system administrator interacts with a client program that
displays information about the network. This client program usually has a graphical
interface; you can think of this interface as playing the same role as a web browser.
Whenever the administrator selects a certain piece of information that he or she wants to
see, the client program uses SNMP to request that information from the node in question.
(SNMP runs on top of UDP.) An SNMP server running on that node receives the request, locates
the appropriate piece of information, and returns it to the client program, which then
displays it to the user.
There is only one complication to this otherwise simple scenario: exactly how does the
client indicate which piece of information it wants to retrieve, and, likewise, how does the
server know which variable in memory to read to satisfy the request? The answer is that SNMP
depends on a companion specification called the management information base (MIB).
The MIB defines the specific pieces of information (the MIB variables) that you can retrieve
from a network node. The current version of the MIB, called MIB-II, organizes variables into
10 different groups. You will recognize that most of the groups correspond to one of the
protocols described in this book, and nearly all of the variables defined for each group
should look familiar. For example:
System: general parameters of the system (node) as a whole, including where the node is
located, how long it has been up, and the system's name.
Interfaces: information about all the network interfaces (adaptors) attached to this node,
such as the physical address of each interface and how many packets have been sent and
received on each interface.
Address translation: information about the Address Resolution Protocol, and in particular
the contents of its address translation table.
IP: variables related to IP, including its routing table, how many datagrams it has
successfully forwarded, and statistics about datagram reassembly; includes counts of how
many times IP drops a datagram for one reason or another.
TCP: information about TCP connections, such as the number of passive and active opens, the
number of resets, the number of timeouts, default timeout settings, and so on;
per-connection information persists only as long as the connection exists.
UDP: information about UDP traffic, including the total number of UDP datagrams that have
been sent and received.
There are also groups for the Internet Control Message Protocol (ICMP), the Exterior
Gateway Protocol (EGP), and SNMP itself. The tenth group is used by different media.
Returning to the issue of the client stating exactly what information it wants to retrieve
from a node, having a list of MIB variables is only half the battle. Two problems remain.
First, we need a precise syntax for the client to use to state which of the MIB variables it
wants to fetch. Second, we need a precise representation for the values returned by the
server. Both problems are addressed using Abstract Syntax Notation One (ASN.1).
Consider the second problem first. ASN.1, together with its Basic Encoding Rules (BER),
defines a representation for different data types, such as integers. The MIB defines the
type of each variable, and then it uses ASN.1/BER to encode the value contained in the
variable as it is transmitted over the network. As far as the first problem is concerned,
ASN.1 also defines an object identification scheme; the MIB uses this identification system
to assign a globally unique identifier to each MIB variable. These identifiers are given in
a "dot" notation, not unlike domain names. For example, 1.3.6.1.2.1.4.3 is the unique ASN.1
identifier for the IP-related MIB variable ipInReceives; this variable counts the number of
IP datagrams that have been received by this node. In this example, the 1.3.6.1.2.1 prefix
identifies the MIB database (remember, ASN.1 object IDs are for all possible objects in the
world), the 4 corresponds to the IP group, and the final 3 denotes the third variable in
this group.
Network management thus works as follows. The SNMP client puts the ASN.1 identifier for the
MIB variable it wants to get into the request message, and it sends this message to the
server. The server then maps this identifier into a local variable (i.e., into a memory
location where the value of this variable is stored), retrieves the current value held in
the variable, and uses ASN.1/BER to encode the value it sends back to the client.
There is one final detail. Many of the MIB variables are either tables or structures. Such
compound variables explain the reason for the SNMP GET-NEXT operation. This operation, when
applied to a particular variable ID, returns the value of that variable plus the ID of the
next variable, for example, the next item in the table or the next field in the structure.
This aids the client in "walking through" the elements of a table or structure.
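The dotted identifier above can be turned into its BER wire form with a few lines of Python (a simplified sketch that only handles small sub-identifiers, i.e., arcs below 128):

```python
def encode_oid(dotted):
    """BER-encode an OID's content octets (simplified: arcs below 128 only).
    Per the ASN.1 rules, the first two arcs x.y pack into one byte as 40*x + y."""
    arcs = [int(a) for a in dotted.split(".")]
    body = [40 * arcs[0] + arcs[1]] + arcs[2:]
    if any(a >= 128 for a in body):
        raise ValueError("multi-byte sub-identifiers not handled in this sketch")
    return bytes(body)

# ipInReceives from the example above: 1.3.6.1.2.1.4.3
print(encode_oid("1.3.6.1.2.1.4.3").hex())  # 2b060102010403
```

Note how the leading 1.3 collapses to the single byte 0x2b (40*1 + 3 = 43), while the remaining arcs each fit in one byte.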
(ii) Discuss the functions performed by DNS. Give an example. (5)
We have been using addresses to identify hosts. While perfectly suited for processing by
routers, addresses are not exactly user friendly. It is for this reason that a unique name is
also typically assigned to each host in a network.
A naming service can be developed to map user-friendly names into router-friendly
addresses. Name services are sometimes called middleware because they fill a gap between
applications and the underlying network.
Host names differ from host addresses in two important ways. First, they are usually of
variable length and mnemonic, thereby making them easier for humans to remember. (In
contrast, fixed-length numeric addresses are easier for routers to process.) Second, names
typically contain no information that helps the network locate (route packets toward) the host.
Addresses, in contrast, sometimes have routing information embedded in them; flat addresses
(those not divisible into component parts) are the exception.
A namespace defines the set of possible names. A namespace can be either flat (names are
not divisible into components), or it can be hierarchical. The naming system maintains a
collection of bindings of names to values. The value can be anything we want the naming
system to return when presented with a name; in many cases it is an address.
A resolution mechanism is a procedure that, when invoked with a name, returns the
corresponding value. A name server is a specific implementation of a resolution mechanism
that is available on a network and that can be queried by sending it a message.
DNS employs a hierarchical namespace rather than a flat namespace, and the “table” of
bindings that implements this namespace is partitioned into disjoint pieces and distributed
throughout the Internet. These sub tables are made available in name servers that can be
queried over the network.
What happens in the Internet is that a user presents a host name to an application program,
and this program engages the naming system to translate this name into a host address. The
application then opens a connection to this host by presenting some transport protocol with
the host's IP address.
Name resolution can proceed in two ways:
Recursive resolution: the queried server resolves the name completely on the client's
behalf, itself querying other servers as needed.
Iterative resolution: the queried server returns its best answer, typically a referral to
the next server to ask, and the client issues the follow-up queries itself.
DOMAIN HIERARCHY:
DNS names are processed from right to left and use periods as the separator. An example
domain name for a host is cicada.cs.princeton.edu. There are domains for each country, plus
the “big six” domains: .edu, .com, .gov, .mil, .org, and .net.
NAME SERVERS:
The first step is to partition the hierarchy into subtrees called zones. Each zone can be
thought of as corresponding to some administrative authority that is responsible for that
portion of the hierarchy.
Consider a university zone, for example. Within it, some departments do not want the
responsibility of managing the hierarchy (and so they remain in the university-level zone),
while others, like the Department of Computer Science, manage their own department-level
zone.
The relevance of a zone is that it corresponds to the fundamental unit of implementation in
DNS: the name server. Specifically, the information contained in each zone is implemented in
two or more name servers.
Each name server, in turn, is a program that can be accessed over the Internet. Clients send
queries to name servers, and name servers respond with the requested information.
Sometimes the response contains the final answer that the client wants, and sometimes the
response contains a pointer to another server that the client should query next.
Each name server implements the zone information as a collection of resource records. In
essence, a resource record is a name-to-value binding, or more specifically, a 5-tuple that
contains the following fields:
< Name, Value, Type, Class, TTL >
The Name and Value fields are exactly what you would expect, while the Type field specifies
how the Value should be interpreted. For example, Type=A indicates that the Value is an IP
address. Thus, A records implement the name-to-address mapping we have been assuming.
Other record types include
NS: The Value field gives the domain name for a host that is running a name server that
knows how to resolve names within the specified domain.
CNAME: The Value field gives the canonical name for a particular host; it is used to
define aliases.
MX: The Value field gives the domain name for a host that is running a mail server that
accepts messages for the specified domain.
The Class field was included to allow entities other than the NIC to define useful record
types. To date, the only widely used Class is the one used by the Internet; it is denoted IN.
Finally, the TTL field shows how long this resource record is valid. It is used by servers that
cache resource records from other servers; when the TTL expires, the server must evict the
record from its cache.
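The 5-tuple resource records above can be modeled directly; the sketch below stores a couple of illustrative records (the names and address are examples, not live servers) and performs the A/CNAME lookup logic:

```python
# Each record is (Name, Value, Type, Class, TTL), as in the text.
RECORDS = [
    ("cicada.cs.princeton.edu", "192.12.69.60",            "A",     "IN", 3600),
    ("cic.cs.princeton.edu",    "cicada.cs.princeton.edu", "CNAME", "IN", 3600),
]

def resolve(name):
    """Follow CNAME aliases until an A record yields an address."""
    for rname, value, rtype, rclass, ttl in RECORDS:
        if rname == name:
            if rtype == "A":
                return value          # name-to-address binding found
            if rtype == "CNAME":
                return resolve(value)  # restart lookup on the canonical name
    return None

print(resolve("cic.cs.princeton.edu"))  # 192.12.69.60
```

Looking up the alias cic.cs.princeton.edu follows the CNAME record to the canonical name and returns that host's A-record address.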
PART C-(1X15= 15 marks)
16. (a)(i) Draw the format of the TCP packet header and explain each of its fields
(10)
A TCP segment consists of a TCP header, TCP options, and the data that the segment
transports. The following table represents a TCP header and its fields. The fields of the
TCP header are the source port and destination port of an application, a 32-bit sequence
number, a 32-bit acknowledgement number, the TCP header length, 6 reserved bits, 6 flags
(URG, ACK, PSH, RST, SYN, FIN), a 16-bit window size, a 16-bit TCP checksum, a 16-bit urgent
pointer, options if any, and padding if needed. The minimum header length is 20 bytes, i.e.,
five 32-bit words. The maximum TCP header size can be up to 15 32-bit words, i.e., 60 bytes.
Source and Destination Ports:
Both the source and destination ports are 16-bit fields that identify the protocol ports of
the sending and receiving applications. The source and destination port numbers, combined
with the source and destination IP addresses in the IP header, uniquely identify each TCP
connection; this combination is referred to as a socket.
Sequence Number:
The sequence number is a 32-bit field that identifies the first byte of data in the data
area of the TCP segment. Every byte in the data stream can be identified by a sequence number.
Acknowledgement Number:
The acknowledgement number is also a 32-bit field; it identifies the next byte of data that
the connection expects to receive from the data stream.
Header Length:
The header length field consists of 4 bits that specify the length of the TCP header in
32-bit words. The receiving TCP module can calculate the start of the data area by examining
the header length field.
Flags:
URG – tells the receiving TCP module that the segment contains urgent data
ACK – tells the receiving TCP module that the acknowledgement number field contains a
valid acknowledgement number
PSH – tells the receiving TCP module to deliver the data to the destination application
immediately
RST – asks the receiving TCP module to reset the TCP connection
SYN – tells the receiving TCP module to synchronize sequence numbers
FIN – tells the receiving TCP module that the sender has finished sending data
Window Size:
The window size field is 16 bits wide and tells the receiving TCP module the number of bytes
that the sending end is willing to accept. The value in this field specifies the width of the
sliding window.
Checksum:
The TCP checksum is a 16-bit field whose calculation covers the TCP data as well as the
header. This field helps the receiving TCP module detect data corruption: TCP requires the
sending TCP module to calculate and include a checksum in this field, and the receiving TCP
module to verify the checksum when data is received.
Urgent pointer:
The urgent pointer is a 16-bit field that specifies a byte location in the TCP data area; it
points to the last byte of urgent data in the TCP data area.
Options:
Like the IP header, the TCP header includes an optional options field. During the initial
negotiation between the two ends of a TCP connection, this field is used to carry the
maximum segment size option, which advertises the largest segment that the TCP module
expects to receive.
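The layout described above can be exercised with Python's struct module; the sketch below packs a minimal 20-byte header (checksum left at zero for simplicity) and reads its fields back:

```python
import struct

# Pack a minimal TCP header: ports, sequence/ack numbers, offset+flags word,
# window, checksum (0 here), urgent pointer. "!" means network byte order.
src_port, dst_port = 12345, 80
seq, ack = 1000, 2000
offset_flags = (5 << 12) | 0x02        # data offset = 5 words; SYN flag (0x02) set
window, checksum, urgent = 65535, 0, 0

header = struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                     offset_flags, window, checksum, urgent)
assert len(header) == 20               # five 32-bit words, the minimum header

fields = struct.unpack("!HHIIHHHH", header)
data_offset = fields[4] >> 12          # upper 4 bits = header length in words
syn_set = bool(fields[4] & 0x02)
print(data_offset * 4, syn_set)        # 20 True
```

The receiving side finds the start of the data area exactly this way: multiply the 4-bit header length by 4 to get the offset in bytes.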
(ii) Specify the justification for having variable field lengths for the fields in the
TCP header. (5)
(or)
(b) Illustrate the sequence of events and the respective protocols involved while
accessing a web page from a machine when it is connected to the Internet for the first
time (15)
So your computer is connected to the Internet and has a unique address. How does it 'talk' to
other computers connected to the Internet? An example should serve here: Let's say your IP
address is 1.2.3.4 and you want to send a message to the computer 5.6.7.8. The message you
want to send is "Hello computer 5.6.7.8!". Obviously, the message must be transmitted over
whatever kind of wire connects your computer to the Internet. Let's say you've dialed into
your ISP from home and the message must be transmitted over the phone line. Therefore the
message must be translated from alphabetic text into electronic signals, transmitted over the
Internet, then translated back into alphabetic text. How is this accomplished? Through the
use of a protocol stack. Every computer needs one to communicate on the Internet and it is
usually built into the computer's operating system (i.e. Windows, Unix, etc.). The protocol
stack used on the Internet is referred to as the TCP/IP protocol stack because of the two
major communication protocols used. The TCP/IP stack looks like this:
Protocol Layer                        Comments
Application Protocols Layer           Protocols specific to applications such as WWW, e-mail, FTP, etc.
Transmission Control Protocol Layer   TCP directs packets to a specific application on a computer using a port number.
Internet Protocol Layer               IP directs packets to a specific computer using an IP address.
Hardware Layer                        Converts binary packet data to network signals and back (e.g., Ethernet network card, modem for phone lines, etc.)
If we were to follow the path that the message "Hello computer 5.6.7.8!" took from our
computer to the computer with IP address 5.6.7.8, it would happen something like this:
1. The message would start at the top of the protocol stack on your computer and work
its way downward.
2. If the message to be sent is long, each stack layer that the message passes through
may break the message up into smaller chunks of data, because data sent over the Internet
(and most computer networks) is sent in manageable chunks. On the Internet, these chunks of
data are known as packets.
3. The packets would go through the Application Layer and continue to the TCP layer.
Each packet is assigned a port number. Ports will be explained later, but suffice to say that
many programs may be using the TCP/IP stack and sending messages. We need to know
which program on the destination computer needs to receive the message because it will be
listening on a specific port.
4. After going through the TCP layer, the packets proceed to the IP layer. This is where
each packet receives its destination address, 5.6.7.8.
5. Now that our message packets have a port number and an IP address, they are ready
to be sent over the Internet. The hardware layer takes care of turning our packets containing
the alphabetic text of our message into electronic signals and transmitting them over the
phone line.
6. On the other end of the phone line your ISP has a direct connection to the Internet.
The ISP's router examines the destination address in each packet and determines where to
send it. Often, the packet's next stop is another router. More on routers and Internet
infrastructure later.
7. Eventually, the packets reach computer 5.6.7.8. Here, the packets start at the bottom
of the destination computer's TCP/IP stack and work upwards.
As the packets go upwards through the stack, all routing data that the sending computer's
stack added (such as IP address and port number) is stripped from the packets.
When the data reaches the top of the stack, the packets have been re-assembled into their
original form, "Hello computer 5.6.7.8!"
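The whole trip down one stack and up the other can be demonstrated on a single machine with Python's socket module; the loopback address stands in for the remote host 5.6.7.8, and the port number is whatever the OS hands out:

```python
import socket
import threading

MESSAGE = b"Hello computer 5.6.7.8!"

def server(listener):
    # The destination stack: accept the connection, let the bytes travel
    # back up through TCP/IP, and echo them to the sender.
    conn, _ = listener.accept()
    conn.sendall(conn.recv(1024))
    conn.close()

# Binding to port 0 lets the OS pick a free port.
listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
threading.Thread(target=server, args=(listener,), daemon=True).start()

# The sending stack: TCP attaches the port number, IP attaches the
# address, and the hardware layer (here, loopback) carries the bytes.
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(MESSAGE)
    echoed = client.recv(1024)

print(echoed.decode())  # Hello computer 5.6.7.8!
```

The message arrives reassembled in its original form, exactly as step 7 above describes, because both stacks strip the addressing data they added on the way down.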