Friday, 14 November 2014

Review of an Article about TCP/IP

           Based on "Understanding TCP/IP Network Stack & Writting Network Apps" article,we will be able to know about Transmission Control Protocol/Internet Protocol .  Internet service will become useless without TCP/IP. Nowadays, all Internet services we have developed and used at NHN are based on solid basis,TCP/IP. This article gave a lot of informations about overall operation scheme of the network stack based on data flow and and control flow in Linux OS and the hardware layer.

First of all, I want to define each of the components. The Transmission Control Protocol/Internet Protocol (TCP/IP) suite was created by the U.S. Department of Defense (DoD) to ensure that communications could survive any conditions and that data integrity would not be compromised under malicious attacks. TCP is connection-oriented in the sense that, prior to transmission, the end points need to establish a connection first. The data units of the TCP protocol are segments, each consisting of a fixed 20-byte header followed by a variable-size data field. TCP breaks a stream of bytes down into segments and reconnects them at the other end, retransmitting data that has been lost and organizing the segments in the correct order. Next, the Internet Protocol (IP) is the principal communications protocol in the Internet protocol suite for relaying datagrams across network boundaries. Its routing function enables internetworking and essentially establishes the Internet.

Usually, since TCP and IP have different layer structures, it would be correct to describe them separately. However, here I will describe them as one. First is connection-oriented: a connection is made between two endpoints (local and remote) and then data is transferred. Here, the "TCP connection identifier" is a combination of the addresses of the two endpoints, having the <local IP address, local port number, remote IP address, remote port number> type. Next is bidirectional byte stream: bidirectional data communication is made by using a byte stream. Then, in-order delivery: a receiver receives data in the order in which the sender sent it. For that, the order of the data is required; to mark the order, a 32-bit integer data type is used. Next, reliability through ACK: when a sender does not receive an acknowledgement from a receiver after sending data, the sender's TCP re-sends the data, so the sender's TCP buffers unacknowledged data. Next is flow control: a sender sends only as much data as the receiver can afford. The receiver sends the sender the maximum number of bytes it can receive (its unused buffer size, the receive window), and the sender sends no more than the receiver's receive window allows. Next, congestion control: the congestion window is used separately from the receive window to prevent network congestion by limiting the volume of data flowing in the network. Like the receive window, the sender sends no more data than its congestion window allows, using a variety of algorithms such as TCP Vegas, Westwood, BIC, and CUBIC. Different from flow control, congestion control is implemented by the sender only.
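To make the connection identifier concrete, here is a minimal Python sketch (the host example.com and port 80 are just placeholder values, not from the article) that opens a connection and prints the <local IP, local port, remote IP, remote port> tuple:

    import socket

    # Minimal sketch: open a TCP connection and print the 4-tuple that identifies it.
    # "example.com" and port 80 are placeholder values, not taken from the article.
    with socket.create_connection(("example.com", 80)) as s:
        local_ip, local_port = s.getsockname()
        remote_ip, remote_port = s.getpeername()
        # <local IP, local port, remote IP, remote port> is the TCP connection identifier
        print(local_ip, local_port, remote_ip, remote_port)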

The network stack has many layers, but they can be classified into three areas: the user area, the kernel area, and the device area. Tasks performed in the user area and the kernel area use the CPU, so these two areas are called the "host" to distinguish them from the device area. When the write system call is called, the data in the user area is copied to kernel memory and then added to the end of the send socket buffer, so that data is sent in order. In Figure 1, the light-gray box refers to the data in the socket buffer. Then, TCP is called. There is a TCP Control Block (TCB) structure connected to the socket; the TCB includes the data required for processing the TCP connection. TCP then creates a segment, which consists of a TCP header and payload data.
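As a rough sketch of that write path (not the article's code; the loopback address and port 8080 are placeholders), an application hands bytes to the kernel with send(), and once the call returns the data sits in the send socket buffer waiting for TCP to transmit it:

    import socket

    # Sketch of the write path: send() copies the bytes from user memory into the
    # kernel's send socket buffer, and TCP transmits them later, in order.
    # 127.0.0.1:8080 is a placeholder for a local test server.
    with socket.create_connection(("127.0.0.1", 8080)) as s:
        s.sendall(b"hello, network stack")                        # returns once the kernel has buffered the data
        print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))  # current send socket buffer size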


Figure: Operation process by each layer of the TCP/IP network stack for received data (http://www.cubrid.org/files/attach/images/220547/523/594/operation_process_by_each_layer_of_tcp_ip_for_data_received.png)
The figure above shows the operation process by each layer of the TCP/IP network stack for handling received data. First, the NIC writes the packet onto its memory. It checks whether the packet is valid by performing the CRC check and then sends the packet to the memory buffer of the host. This buffer is memory that has already been requested from the kernel by the driver and allocated for receiving packets. After the buffer has been allocated, the driver tells the NIC the memory address and size. If there is no host memory buffer allocated by the driver when the NIC receives a packet, the NIC may drop the packet.
After sending the packet to the host memory buffer, the NIC sends an interrupt to the host OS.
Then the driver checks whether it can handle the new packet or not. For this, the driver-NIC communication protocol defined by the manufacturer is used.
When the driver should send a packet to the upper layer, the packet must be wrapped in a packet structure that the OS uses for the OS to understand the packet. For example, sk_buff of Linux, mbuf of BSD-series kernel, and NET_BUFFER_LIST of Microsoft Windows are the packet structures of the corresponding OS. The driver sends the wrapped packets to the upper layer.
The Ethernet layer checks whether the packet is valid and then de-multiplexes the upper protocol (network protocol). At this time, it uses the ethertype value of the Ethernet header. The IPv4 ethertype value is 0x0800. It removes the Ethernet header and then sends the packet to the IP layer.
The IP layer also checks whether the packet is valid. In other words, it checks the IP header checksum. It logically determines whether it should perform IP routing and make the local system handle the packet, or send the packet to the other system. If the packet must be handled by the local system, the IP layer de-multiplexes the upper protocol (transport protocol) by referring to the proto value of the IP header. The TCP proto value is 6. It removes the IP header and then sends the packet to the TCP layer.
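To illustrate the de-multiplexing steps just described, here is a small sketch, assuming a raw IPv4-over-Ethernet frame, that performs the same two checks with Python's struct module: EtherType 0x0800 for IPv4 and then protocol 6 for TCP:

    import struct

    ETH_P_IP = 0x0800    # IPv4 EtherType in the Ethernet header
    IPPROTO_TCP = 6      # TCP protocol number in the IPv4 header

    def demux(frame: bytes) -> str:
        # Ethernet header: 6-byte destination MAC, 6-byte source MAC, 2-byte EtherType.
        ethertype = struct.unpack("!H", frame[12:14])[0]
        if ethertype != ETH_P_IP:
            return "not IPv4, handled by another protocol handler"
        ip = frame[14:]                   # the Ethernet header is removed here
        header_len = (ip[0] & 0x0F) * 4   # IHL field, counted in 4-byte words
        if ip[9] != IPPROTO_TCP:          # protocol field of the IPv4 header
            return "IPv4 but not TCP"
        return "TCP segment starts at byte %d of the frame" % (14 + header_len)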
Like the lower layers, the TCP layer checks whether the packet is valid. It also checks the TCP checksum. As mentioned before, since the current network stack uses checksum offload, the TCP checksum is computed by the NIC, not by the kernel. The size of the receive socket buffer is the TCP receive window. Up to a certain point, the TCP throughput increases when the receive window is large. In the past, the socket buffer size was adjusted by the application or in the OS configuration. The latest network stack has a function to adjust the receive socket buffer size, i.e., the receive window, automatically.
When the application calls the read system call, the area is changed to the kernel area and the data in the socket buffer is copied to memory in the user area. The copied data is then removed from the socket buffer. Then TCP is called. TCP increases the receive window because there is new space in the socket buffer, and it sends a packet according to the protocol status. If no packet is transferred, the system call is terminated.
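A matching sketch of the read path (the address is a placeholder again): each recv() copies data out of the kernel's receive socket buffer into user memory, which is exactly what frees space and lets the receive window grow:

    import socket

    # Sketch of the read path (127.0.0.1:8080 is a placeholder address). Each recv()
    # copies data out of the kernel's receive socket buffer into user memory,
    # freeing buffer space so the receive window can open up again.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 8080))
    server.listen(1)
    conn, addr = server.accept()
    while True:
        chunk = conn.recv(4096)   # the read system call; may return fewer than 4096 bytes
        if not chunk:             # an empty result means the peer closed the connection
            break
        print("received", len(chunk), "bytes from", addr)
    conn.close()
    server.close()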
This article is really helpful for those who want to develop network programs, run performance tests, and troubleshoot network issues.


TCP/IP Third Article

Key Characteristics of TCP/IP

How should I design a network protocol to transmit data quickly while keeping the data order without any data loss? TCP/IP has been designed with this consideration. The following are the key characteristics of TCP/IP required to understand the concept of the stack.
TCP and IP
Technically, since TCP and IP have different layer structures, it would be correct to describe them separately. However, here we will describe them as one.

1. CONNECTION-ORIENTED

First, a connection is made between two endpoints (local and remote) and then data is transferred. Here, the "TCP connection identifier" is a combination of the addresses of the two endpoints, having the <local IP address, local port number, remote IP address, remote port number> type.
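A minimal sketch of this identifier in action, assuming a reachable placeholder server at example.com port 80: two connections to the same remote endpoint differ only in their local port, so the 4-tuples stay unique:

    import socket

    # Sketch: two connections to the same remote endpoint are told apart only by
    # their local port, i.e. by the 4-tuple connection identifier.
    # "example.com" and port 80 are placeholder values.
    a = socket.create_connection(("example.com", 80))
    b = socket.create_connection(("example.com", 80))
    print("connection A:", a.getsockname(), a.getpeername())
    print("connection B:", b.getsockname(), b.getpeername())   # same remote pair, different local port
    a.close()
    b.close()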

2. BIDIRECTIONAL BYTE STREAM

Bidirectional data communication is made by using byte stream.

3. IN-ORDER DELIVERY

A receiver receives data in the order in which the sender sent it. For that, the order of the data is required; to mark the order, a 32-bit integer data type is used.
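Because the 32-bit sequence number eventually wraps around, implementations compare sequence numbers with modular arithmetic rather than a plain less-than. The helper below is only a sketch of the idea, loosely modeled on the kernel's before()/after() style comparisons:

    SEQ_MOD = 1 << 32    # TCP sequence numbers are 32-bit and wrap around

    def seq_before(a: int, b: int) -> bool:
        """True if sequence number a comes before b, allowing for wraparound."""
        return ((a - b) % SEQ_MOD) > (SEQ_MOD // 2)

    print(seq_before(100, 200))          # True: 100 comes before 200
    print(seq_before(0xFFFFFFF0, 5))     # True: a number just before the wrap is still "earlier"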

4. RELIABILITY THROUGH ACK

When a sender does not receive an ACK (acknowledgement) from a receiver after sending data, the sender's TCP re-sends the data to the receiver. Therefore, the sender's TCP buffers any data not yet acknowledged by the receiver.
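The toy sketch below (not real TCP, just the buffering idea) shows why the sender must keep a copy of unacknowledged bytes: data is released only once a cumulative ACK covers it, and until then it stays available for retransmission:

    # Toy sketch (not real TCP): the sender keeps every unacknowledged byte buffered
    # so it can retransmit, and releases data only once a cumulative ACK covers it.
    class SendBuffer:
        def __init__(self, start_seq: int = 0):
            self.snd_una = start_seq     # oldest unacknowledged sequence number
            self.unacked = bytearray()   # bytes sent but not yet acknowledged

        def send(self, data: bytes) -> None:
            self.unacked += data         # keep a copy in case retransmission is needed

        def on_ack(self, ack: int) -> None:
            acked = ack - self.snd_una
            if acked > 0:
                del self.unacked[:acked]   # these bytes no longer need retransmitting
                self.snd_una = ack

    buf = SendBuffer()
    buf.send(b"hello world")
    buf.on_ack(5)
    print(bytes(buf.unacked))   # b' world' stays buffered until it is acknowledged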

5. FLOW CONTROL

A sender sends only as much data as a receiver can afford. The receiver sends the sender the maximum number of bytes that it can receive (its unused buffer size, i.e., the receive window), and the sender sends no more data than the receiver's receive window allows.
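On Linux, the receive socket buffer that backs this receive window can be inspected, and within OS limits resized, through the SO_RCVBUF socket option; a minimal sketch (the kernel may round or cap the requested value):

    import socket

    # Minimal sketch: the receive socket buffer backs the advertised receive window.
    # The kernel may round or cap the value requested with SO_RCVBUF.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("default receive buffer:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)   # request roughly 256 KB
    print("after the request:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    s.close()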

6. CONGESTION CONTROL

The congestion window is used separately from the receive window to prevent network congestion by limiting the volume of data flowing in the network. Like the receive window, the sender sends no more data than its congestion window allows, using a variety of algorithms such as TCP Vegas, Westwood, BIC, and CUBIC. Different from flow control, congestion control is implemented by the sender only.
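On Linux the congestion control algorithm can even be chosen per socket. The sketch below uses the TCP_CONGESTION option (available in Python 3.6+ on Linux, and only for algorithms the running kernel provides):

    import socket

    # Sketch: choose the congestion control algorithm for one socket on Linux.
    # This works only if the running kernel provides the requested algorithm
    # (CUBIC is the usual default on modern Linux).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"cubic")
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))   # read back the algorithm name
    s.close()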

Data Transmission

As indicated by its name, a network stack has many layers. The following Figure 1 shows the layer types.
Figure 1: Operation Process by Each Layer of TCP/IP Network Stack for Data Transmission (operation_process_by_each_layer_of_tcp_ip.png).

TCP/IP Second Article

October 29, 2007

Networking Basics: TCP, UDP, TCP/IP and OSI Model

By 
The Transmission Control Protocol/Internet Protocol (TCP/IP) suite was created by the U.S. Department of Defense (DoD) to ensure that communications could survive any conditions and that data integrity wouldn’t be compromised under malicious attacks.
The Open Systems Interconnection Basic Reference Model (OSI Model) is an abstract description for network protocol design, developed as an effort to standardize networking.
In this article, I will present the differences between the DoD and the OSI models and then provide details about the DoD’s version of TCP/IP. I will also describe the protocols used at the various layers of the DoD model and provide you with the details of TCP and UDP protocols. Throughout this article you will find useful information concerning the protocol suite of the century: TCP/IP.
So if you’re preparing for your CCENT or CCNA exams, or if you’re just interested in networking, this is one article you don’t want to miss! Fasten your seat belts and have a good ride!

TCP/IP and the OSI Model Comparison

Let's start by comparing the TCP/IP and OSI models. The TCP/IP model is basically a shorter version of the OSI model: it consists of four layers instead of seven. Despite their architectural differences, both models have interchangeable transport and network layers and their operation is based upon packet-switched technology. The diagram below indicates the differences between the two models:
TCP/IP and OSI Models
  • Application Layer: The Application layer deals with representation, encoding and dialog control issues. All these issues are combined together and form a single layer in the TCP/IP model whereas three distinctive layers are defined in the OSI model.
  • Host-to-Host: Host-to-Host protocol in the TCP/IP model provides more or less the same services with its equivalent Transport protocol in the OSI model. Its responsibilities include application data segmentation, transmission reliability, flow and error control.
  • Internet: Again, the Internet layer in the TCP/IP model provides the same services as the OSI's Network layer. Their purpose is to route packets to their destination independently of the path taken.
  • Network Access: The network access layer deals with all the physical issues concerning data termination on network media. It includes all the concepts of the data link and physical layers of the OSI model for both LAN and WAN media.
The diagram below shows clearly the way TCP/IP protocol suite relates to the TCP/IP model.
TCP/IP Protocol Suite 2

Host-to-Host Layer Protocols

Two protocols: Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) are defined for transmitting datagrams. We will look at the details of both these protocols as well as their interaction with the upper layer.

Transmission Control Protocol (TCP)

TCP is connection-oriented in the sense that prior to transmission end points need to establish a connection first. TCP protocol data units are called segments. The sending and receiving TCP entities exchange data in the form of segments, which consist of a fixed 20-byte header followed by a variable size data field.
TCP is responsible for breaking down a stream of bytes into segments and reconnecting them at the other end, retransmitting whatever might be lost and also organizing the segments in the correct order. The segment size is restricted by the maximum transfer unit (MTU) of the underlying link layer technology (MTU is generally 1500 bytes which is the maximum payload size of the Ethernet).
The image below shows the TCP segment format. The most important fields are explained further on, and a short parsing sketch follows the field list.
TCP Segment Format 3
  • Source Port and Destination Port fields together identify the two local end points of the particular connection. A port plus its hosts’ IP address forms a unique end point. Ports are used to communicate with the upper layer and distinguish different application sessions on the host.
  • The Sequence Number and Acknowledgment Number fields specify bytes in the byte stream. The sequence number is used for segment differentiation and is useful for reordering or retransmitting lost segments. The Acknowledgment number is set to the next byte expected.
  • Data offset or TCP header length indicates how many 4-byte words are contained in the TCP header.
  • The Window field indicates how many bytes can be transmitted before an acknowledgment is received.
  • The Checksum field is used to provide extra reliability, allowing the receiver to detect segments corrupted in transit.
  • The actual user data are included after the end of the header.
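As a rough illustration of those fields, the sketch below unpacks the fixed 20-byte TCP header with Python's struct module; the field layout follows the segment format above, and the function name is just an example:

    import struct

    def parse_tcp_header(segment: bytes) -> dict:
        # Fixed 20-byte TCP header, network byte order: ports, sequence and
        # acknowledgment numbers, data offset + flags, window, checksum, urgent pointer.
        (src_port, dst_port, seq, ack,
         offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
        header_len = (offset_flags >> 12) * 4   # data offset is counted in 4-byte words
        return {
            "src_port": src_port, "dst_port": dst_port,
            "seq": seq, "ack": ack,
            "header_len": header_len, "window": window,
            "checksum": checksum, "payload": segment[header_len:],
        }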
Let’s have a look at how a TCP segment is captured by Ethereal network analyzer. The image below shows a request-response message sequence carried over TCP. Notice the fields discussed above: Source Port, Destination Port, Sequence number, Acknowledgement number, Window size and checksum.
TCP Ethereal 4

User Datagram Protocol (UDP)

The UDP protocol consists of fewer fields compared to TCP. The reason is that certain data types do not require reliable delivery and the extra overhead. Real-time traffic, for example, needs to be transported in an efficient way without error correction and retransmission mechanisms.
UDP is considered to be a connectionless protocol. It leaves reliability to be handled by the application layer. All it cares about is fast transmission. The UDP segment format is presented in the diagram below:
UDP Segment Format 5
Let’s see how a UDP segment is captured by Ethereal. Notice the small header size.
UDP Ethereal 6
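A minimal sketch of that connectionless model (127.0.0.1:9999 is a placeholder address): a datagram is sent with no connection setup, no acknowledgements, and no retransmission, so delivery is never guaranteed:

    import socket

    # Minimal UDP sketch: no connection setup, no ACKs, no retransmission.
    # 127.0.0.1:9999 is a placeholder address; delivery is never guaranteed.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"fire and forget", ("127.0.0.1", 9999))   # one datagram, no handshake

    data, addr = receiver.recvfrom(2048)
    print(data, "from", addr)
    sender.close()
    receiver.close()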

Which One Should You Use?

Choosing the right transport protocol to use depends on the type of data to be transferred. For information that needs reliability, sequence transmission and data integrity — TCP is the transport protocol to use. For data that require real-time transmission with low overhead and less processing — UDP is the right choice.
The following table summarizes the key-characteristics of each one of these protocols. Keep them in mind when choosing the transport protocol for your data.
TCP vs UDP 7

About the Author

Stelios (CCNA, NET+, MOUS) holds a BSc in Electronic Engineering and an MSc in Communication Networks. He has over three years of experience in teaching MS Office applications, networking courses and GCE courses in Information Technology. Stelios is currently working as a VoIP Engineer in a Telecom company, where he uses his knowledge in practice. He has successfully completed training on CCNP topics, Linux and IMS. His enthusiasm, ambition and knowledge motivate him to offer his best. Stelios has written many articles covering Cisco CCENT, CCNA, and CCNP.


Monday, 27 October 2014

Article About TCP/IP

Today I feel like sharing about TCP/IP... it turns out there is still a lot more to learn. Here is one of the articles related to TCP/IP. Credit to Sir Hafifi.

InfiniBand is a network communications protocol that offers a switch-based fabric of point-to-point bi-directional serial links between processor nodes, as well as between processor nodes and input/output nodes, such as disks or storage. Every link has exactly one device connected to each end of the link, such that the characteristics controlling the transmission (sending and receiving) at each end are well defined and controlled.
InfiniBand creates a private, protected channel directly between the nodes via switches, and facilitates data and message movement without CPU involvement with Remote Direct Memory Access (RDMA) and Send/Receive offloads that are managed and performed by InfiniBand adapters. The adapters are connected on one end to the CPU over a PCI Express interface and to the InfiniBand subnet through InfiniBand network ports on the other. This provides distinct advantages over other network communications protocols, including higher bandwidth, lower latency, and enhanced scalability.
Figure 1: Basic InfiniBand Architecture
The InfiniBand Trade Association (IBTA), established in 1999, chartered, maintains, and furthers the InfiniBand specification, and is responsible for compliance and interoperability testing of commercial InfiniBand products. Through its roadmap, the IBTA has pushed the development of higher performance more aggressively than any other interconnect solution, ensuring an architecture that is designed for the 21st century.
InfiniBand is designed to enable the most efficient data center implementation. It natively supports server virtualization, overlay networks, and Software-Defined Networking (SDN). InfiniBand takes an application-centric approach to messaging, finding the path of least resistance to deliver data from one point to another. This differs from traditional network protocols (such as TCP/IP and Fibre Channel), which use a more network-centric method for communicating.
Direct access means that an application does not rely on the operating system to deliver a message. In traditional interconnect, the operating system is the sole owner of shared network resources, meaning that applications cannot have direct access to the network. Instead, applications must rely on the operating system to transfer data from the application's virtual buffer to the network stack and onto the wire, and the operating system at the receiving end must have similar involvement, only in reverse.
In contrast, InfiniBand avoids operating system involvement by bypassing the network stack to create the direct channel for communication between applications at either end. The simple goal of InfiniBand is to provide a message service for an application to communicate directly with another application or storage. Once that is established, the rest of the InfiniBand architecture works to ensure that these channels are capable of carrying messages of varying sizes, to virtual address spaces spanning great physical distances, with isolation and security.
InfiniBand is architected for hardware implementation, unlike TCP which is architected with software implementation in mind. InfiniBand is therefore a lighter weight transport service than TCP in that it does not need to re-order packets, as the lower level link layer provides in-order packet delivery. The transport layer is only required to check the packet sequence and deliver packets in order.
Further, because InfiniBand offers credit-based flow control (where a sender node does not send data beyond the "credit" amount that has been advertised by the receive buffer on the opposite side of a link), the transport layer does not require a drop packet mechanism like the TCP windowing algorithm to determine the optimal number of in-flight packets. This enables efficient products delivering 56 and soon 100Gb/s data rates to applications with very low latency and negligible CPU utilization.
InfiniBand uses RDMA as its method of transferring the data from one end of the channel to the other. RDMA is the ability to transfer data directly between the applications over the network with no operating system involvement and while consuming negligible CPU resources on both sides (zero-copy transfers). The application on the other side simply reads the message directly from the memory, and the message has been transmitted successfully.
This reduced CPU overhead increases the network's ability to move data quickly and allows applications to receive data faster. The time interval for a given quantity of data to be transmitted from source to destination is known as latency, and the lower the latency, the faster the application job completion.
Figure 2: Traditional Interconnect. Figure 3: RDMA Zero-Copy Interconnect.
FDR InfiniBand has achieved latency as low as 0.7 microseconds, far and away the lowest latency available for data transmission. InfiniBand's primary advantages over other interconnect technologies include:
  • Higher throughput - InfiniBand consistently supports the highest end-to-end throughput, toward both the server and the storage connection
    • In 2008, InfiniBand introduced 40Gb/s (QDR) to the market, while Ethernet supported 10Gb and Fibre Channel only 8Gb
    • In 2011, InfiniBand introduced 56Gb/s (FDR) to the market, while Ethernet supported 40Gb and Fibre Channel only 16Gb
    • 100Gb/s (EDR) InfiniBand products were launched in 2014, and 200Gb/s (HDR) will follow in the next few years, sustaining the market gap with the competitive fabrics
  • Lower latency - RDMA zero-copy networking reduces OS overhead so data can move through the network quickly
  • Enhanced scalability - InfiniBand can accommodate flat networks of around 40,000 nodes in a single subnet and up to 2^128 nodes (virtually an unlimited number) in a global network, based on the same switch components simply by adding additional switches
  • Higher CPU efficiency - With data movement offloads the CPU can spend more compute cycles on its applications, which will reduce run time and increase the number of jobs per day
  • Reduced management overhead - InfiniBand switches can run in Software-Defined Networking (SDN) mode, allowing them to run as part of the fabric without CPU management
  • Simplicity - InfiniBand is exceedingly easy to install when building a simple fat-tree cluster, as opposed to Ethernet which requires knowledge of various advanced protocols to build an IT cluster
Most of all, InfiniBand offers a better return on investment, with higher throughput and CPU efficiency at competitive pricing, equaling higher productivity with a lower cost per endpoint.
Mellanox offers a complete FDR 56Gb/s InfiniBand end-to-end portfolio for data centers and high-performance computing systems, which includes switches and cables. For more information on Mellanox InfiniBand products, please see http://www.mellanox.com/page/products_overview.


Tuesday, 28 January 2014

Wow... wow... wow... stay slim forever! Do you have any of these problems?
1) excess fat on the arms
2) fat on the thighs
3) belly fat
4) fat on the hips
Now there's no need to worry or stress anymore... with OPA SNOW, the greatest technology from Korea... many people have already tried it.


It is not an oil, not a cream, and not a gel... it is a mousse, which gives a stronger effect because it absorbs easily into the deeper layers of the skin.

The price is very reasonable... get it right away, it is really cheap... trust me, I have tried it myself.



     

As easy as A, B, C...


Please check out this page.

Get it right away... before stock runs out... demand is very high.
WhatsApp or WeChat also works - 0193021668 ~ atira


Tuesday, 30 July 2013

Earn money and a monthly income the easy way


Money?? It's never enough, right? The more we get, the more we want to spend. As students we also have to think about how to earn our own income... after buying things, even covering our own petrol money is a struggle. This is really easy to do... just typing... and the best part is that no capital is needed! Hehe...

Hah... I want to share this. Have you ever heard of MegaTypers or Qlink? Today I want to talk a bit about MegaTypers. MegaTypers is a site that pays for our work quickly and genuinely. It provides software plus an ID so that we can work fast and earn more than $20 a day.
We can also create an unlimited number of IDs and receive payment into one account. MegaTypers payments are transferred directly to our PayPal / Payza / Perfect Money account. Great, right?? Right now my whole family is doing this and earning a bit of extra income.

How do you register?? It's easy... just go here: http://www.megatypers.com/register
Just register there. It will ask for an INVITATION CODE; enter this = 7BFA