Let us now turn from the OSI reference model to the reference model used in the grandparent of all computer networks, the ARPANET, and its successor, the worldwide Internet.
The ARPANET was a research network sponsored by the US Department of Defense. It eventually connected hundreds of universities and government installations using leased telephone lines. When satellite and radio networks were added later, the existing protocols had trouble interworking with them, so a new reference architecture was needed.
Thus the ability to connect multiple networks together in a seamless way was one of the major design goals from the very beginning. This architecture later became known as the TCP/IP Reference Model, after its two primary protocols.
The primary objective behind the ARPANET (and subsequently TCP/IP) was a network that could survive the loss of its hardware: even if some of its precious hosts, routers, and internetwork gateways were suddenly destroyed (say, in wartime), existing conversations should not be broken off. In other words, connections should remain intact as long as the source and destination machines were functioning, even if some of the machines or transmission lines in between were suddenly put out of operation. Furthermore, a flexible architecture was needed, since applications with divergent requirements were envisioned, ranging from transferring files to real-time speech transmission.
The figure below shows how the TCP/IP reference model compares with the OSI reference model.
All these requirements led to the choice of a packet-switching network based on a connectionless internetwork layer. This layer, called the internet layer, is the key layer that holds the whole architecture together. Its job is to permit hosts to inject packets into any network and have them travel independently to the destination (potentially on a different network). They may even arrive in a different order than they were sent, in which case it is the job of the higher layers to rearrange them, if in-order delivery is desired (the sketch below illustrates this). Note that "internet" is used here in a generic sense, even though this layer is present in the Internet.
The analogy here is with the (snail) mail system. A person can drop a sequence of international letters into a mailbox in one country, and with a little luck, most of them will be delivered to the correct address in the destination country. Probably the letters will travel through one or more international mail gateways along the way, but this is transparent to the users. Further, the fact that each country (i.e., each network) has its own stamps, preferred envelope sizes, and delivery rules is hidden from the users.
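To make the out-of-order delivery point concrete, here is a minimal sketch (not from the original text) of how a higher layer on the receiving host might restore the sender's order using sequence numbers. The struct datagram type and the sample payloads are purely hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical datagram carrying a sequence number assigned by a higher layer. */
struct datagram {
    int  seq;           /* sequence number set by the sender                   */
    char payload[32];   /* application data carried in this datagram           */
};

/* Compare two datagrams by sequence number, for use with qsort(). */
static int by_seq(const void *a, const void *b)
{
    const struct datagram *x = a, *y = b;
    return x->seq - y->seq;
}

int main(void)
{
    /* Datagrams as they might arrive: independently routed, out of order. */
    struct datagram received[] = {
        {2, "world"}, {0, "hello"}, {1, ", "}
    };
    int n = sizeof(received) / sizeof(received[0]);

    /* The receiving host restores the original order before delivering
       the data to the application.                                       */
    qsort(received, n, sizeof(received[0]), by_seq);

    for (int i = 0; i < n; i++)
        printf("%s", received[i].payload);
    printf("\n");
    return 0;
}
```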
The internet layer defines an official packet format and protocol called IP (Internet Protocol). The job of the internet layer is to deliver IP packets where they are supposed to go. Packet routing is clearly the major issue here, as is avoiding congestion. For these reasons, it is reasonable to say that the TCP/IP internet layer is very similar in functionality to the OSI network layer.
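As a rough illustration of what this "official packet format" contains, the fixed part of the IPv4 header can be sketched as a C structure. This is only a teaching sketch; real protocol stacks use the system definitions (e.g. in <netinet/ip.h>) and convert multi-byte fields between network and host byte order.

```c
#include <stdint.h>

/* Simplified sketch of the fixed 20-byte IPv4 header (RFC 791).
   Optional fields are omitted and byte-order handling is ignored here. */
struct ipv4_header {
    uint8_t  version_ihl;     /* 4-bit version, 4-bit header length in words */
    uint8_t  type_of_service; /* precedence / service quality hints          */
    uint16_t total_length;    /* header + data, in bytes                     */
    uint16_t identification;  /* used to reassemble fragments                */
    uint16_t flags_fragment;  /* 3-bit flags, 13-bit fragment offset         */
    uint8_t  time_to_live;    /* hop limit; a router discards at zero        */
    uint8_t  protocol;        /* payload type: 6 = TCP, 17 = UDP             */
    uint16_t header_checksum; /* protects the header only                    */
    uint32_t source_address;  /* 32-bit sender IP address                    */
    uint32_t dest_address;    /* 32-bit receiver IP address                  */
};
```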
The layer above the internet layer in the TCP/IP model is called the Transport layer. It is designed to allow peer entities on the source and destination hosts to carry on a conversation, the same as in the OSI Transport layer.
Two end-to-end protocols have been defined here. The first one, TCP (Transmission Control Protocol), is a reliable connection-oriented protocol that allows a byte stream originating on one machine to be delivered without error to any other machine in the internet. It fragments the incoming byte stream into discrete messages and passes each one on to the internet layer. At the destination, the receiving TCP process reassembles the received messages into the output stream. TCP also handles flow control to make sure a fast sender cannot swamp a slow receiver with more messages than it can handle.
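A minimal TCP client written against the Berkeley sockets API shows what this reliable byte-stream service looks like to an application. The server address 192.0.2.10 and port 7 below are placeholders, and error handling is kept to a bare minimum.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);    /* SOCK_STREAM => TCP */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(7);                  /* placeholder port   */
    inet_pton(AF_INET, "192.0.2.10", &server.sin_addr);

    if (connect(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("connect");
        return 1;
    }

    /* TCP presents a reliable byte stream: what is written here arrives
       in order and without error at the other end, or the connection fails. */
    const char *msg = "hello";
    write(sock, msg, strlen(msg));

    char reply[128];
    ssize_t n = read(sock, reply, sizeof(reply) - 1);
    if (n > 0) {
        reply[n] = '\0';
        printf("received: %s\n", reply);
    }
    close(sock);
    return 0;
}
```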
The second protocol in this layer, UDP (User Datagram Protocol), is an unreliable, connectionless protocol for applications that do not want TCP's sequencing or flow control and wish to provide their own. It is also widely used for one-shot, client-server type request-reply queries and applications in which prompt delivery is more important than accurate delivery, such as transmitting speech or video.
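By contrast, a UDP sender simply hands one datagram to the internet layer and hopes for the best, as the sketch below shows. Again, the address and port are placeholders.

```c
#include <string.h>
#include <stdio.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);     /* SOCK_DGRAM => UDP  */
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in server = {0};
    server.sin_family = AF_INET;
    server.sin_port   = htons(5000);               /* placeholder port   */
    inet_pton(AF_INET, "192.0.2.10", &server.sin_addr);

    /* Fire-and-forget: sendto() passes one datagram to the internet layer.
       If it is lost, duplicated, or arrives out of order, UDP does nothing
       about it; the application must cope (or tolerate the loss, as
       speech and video typically do).                                      */
    const char *query = "what time is it?";
    sendto(sock, query, strlen(query), 0,
           (struct sockaddr *)&server, sizeof(server));

    close(sock);
    return 0;
}
```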
The TCP/IP model does not have session and presentation layers. No need for them was perceived, so they were not included. Experience with the OSI model has proven this view correct: they are of little use to most applications.
On top of the Transport layer is the Application layer. It contains all the higher-level protocols. The early ones included virtual terminal (TELNET), file transfer (FTP), and electronic mail (SMTP).
The virtual terminal protocol allows a user on one machine to log into a distant machine and work there.
The file transfer protocol provides a way to move data efficiently from one machine to another.
Electronic mail was originally just a kind of file transfer, but later a specialized protocol was developed for it.
Many other protocols have been added over the years, such as DNS (Domain Name System) for mapping host names onto their network addresses, NNTP, the protocol used for moving news articles around, and HTTP, the protocol used for fetching pages on the World Wide Web.
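For instance, an application normally reaches DNS through the standard resolver call getaddrinfo(), which maps a host name onto one or more network addresses. In the sketch below, the host name example.com is used purely as a sample.

```c
#include <stdio.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <arpa/inet.h>

int main(void)
{
    struct addrinfo hints, *results, *p;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_INET;        /* IPv4 only, for simplicity */
    hints.ai_socktype = SOCK_STREAM;

    /* The resolver typically sends a DNS query to translate the name. */
    int err = getaddrinfo("example.com", NULL, &hints, &results);
    if (err != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }

    for (p = results; p != NULL; p = p->ai_next) {
        struct sockaddr_in *addr = (struct sockaddr_in *)p->ai_addr;
        char text[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &addr->sin_addr, text, sizeof(text));
        printf("example.com -> %s\n", text);
    }

    freeaddrinfo(results);
    return 0;
}
```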
Below the internet layer is a great void. The TCP/IP reference model does not really say much about what happens here, except to point out that the host has to connect to the network using some protocol so it can send IP packets over it. This protocol is not defined and varies from host to host and network to network.