Thursday, January 14, 2016

How to Update/Change Java to a New Version In Linux

  • Download the new JDK
  • Extract it and move it to /usr/lib/jvm, the directory where all your installed Java versions live
  • Now use a script to change the default Java version (the 0.5 in the filename below is the script's own version, not a Java version)
 wget http://webupd8.googlecode.com/files/update-java-0.5b  
 chmod +x update-java-0.5b  
 sudo ./update-java-0.5b  

Wednesday, November 18, 2015

Delay Locked Loop, High-Level Data Link Control (HDLC)

DLL
A Delay-Locked Loop (DLL) supports high-bandwidth data rates between devices. DLLs are circuits that provide zero propagation delay, low clock skew between the output clock signals distributed throughout a device, and advanced clock-domain control. These dedicated DLLs can be used to implement several circuits that improve and simplify system-level design.

High-Level Data Link Control (HDLC) 


  • Developed by ISO
  • Supports full-duplex communication
  • Supports point-to-point and multipoint data links
  • Bit-oriented (not byte-oriented): flags are defined by bit patterns

OSI TCP/IP Model

A computer network is an interconnected collection of autonomous computers that are able to exchange information. The connection of these autonomous computers was first accomplished with copper wires, but fiber optics, microwaves and communication satellites are also used in order to achieve greater speeds.

   There are five main categories of Networks: 
  1. Local Area Networks  (LANs)
  2. Metropolitan Area Networks  (MANs)
  3. Wide Area Networks  (WANs)
  4. Wireless Networks
  5. Internetworks
All these categories of networks are classified by their scale. The term scale refers to the interprocessor distance, i.e., the distance over which the network is formed. The table below classifies these networks by scale.

Distance    Location    Network type
10 m        Room        Local Area Network
100 m       Building    Local Area Network
1 km        Campus      Local Area Network
10 km       City        Metropolitan Area Network
100 km      Country     Wide Area Network
1,000 km    Continent   Wide Area Network
10,000 km   Planet      The Internet
Classifying these Networks by their distance is very important as different techniques are used at different scales.
As far as this module is concerned, only LANs and internetworks will be discussed in detail later in this course.



  • Local Area Networks

LANs are computer networks that span a small area such as a room, a building or even a campus. They are widely used to connect workstations and personal computers. Each computer connected to the network has its own CPU to execute programs, but it can also access data and devices (such as printers) anywhere in the LAN. LANs have the advantage of transmitting information at very fast rates: traditional LANs run at speeds of 10-100 Mbit/s. On the other hand, the distances they cover are limited, and the number of computers that can be attached to a single LAN is restricted. Different kinds of topologies are possible for broadcast LANs; the most commonly used ones are shown in the figure below.
                                             
                                                                        Figure 1: LAN 


Topologies
  1. Star Topology: All devices are connected to a central hub. 
  2. Ring Topology: All devices are connected together forming a ring. Each device is connected directly to two other devices attached to it.
  3. Bus Topology: The devices are connected to a central cable called the bus. Ethernet systems, discussed later in the course, use this kind of topology.

 

  • Protocol Hierarchies


               
                                                             Figure 2: Layers, Protocols and Interfaces
In order to understand how actual communication is achieved between two remote hosts connected to the same network, a general network diagram is shown above, divided into a series of layers. As will be seen later in the course, the number of layers and the function of each layer differ from network to network. Each layer passes data and control information to the layer below it. At each layer some processing is performed on the data received from the layer above, and the result is passed further down. This continues until the lowest layer is reached: actual communication occurs when the information passes through layer 1 and reaches the physical medium. This is shown with the solid lines on the diagram.
Conceptually, layer n on one machine maintains a conversation with layer n on the other machine. The rules governing this conversation are the layer's protocol: a protocol is a collection of rules and conventions agreed between the communicating parties on how communication is to proceed. This is known as virtual communication and is indicated with the dotted lines on the diagram above.
Another important element of the diagram is the interface between each pair of layers. The interface defines the services and operations the lower layer offers to the one above it. When a network is built, decisions are made about how many layers to include and what each layer should do. Each layer performs a different function, and as a result the amount of information passed from layer to layer is minimized.



Connection-oriented service: The user first establishes a connection, then uses the connection, and finally releases it. The sender pushes bits of information into the connection and the receiver takes them out in the same order as they were originally sent.
Connectionless service: Each packet of information carries the full destination address and is routed from source to destination independently of the others. Packets may take different routes, so it is possible for two packets sent to the same destination to arrive out of order: the first one sent may be delayed and the second one may arrive first. Care must therefore be taken to ensure that all the bits arrive correctly and in the order they were sent.
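To make the distinction concrete, here is a minimal sketch using Python's standard socket module (the host name and port number are placeholders of my own, not part of the text above): TCP provides the connection-oriented byte stream, while UDP sends self-contained datagrams that each carry the destination address.

  import socket

  # Connection-oriented (TCP): establish the connection, use it, then release it.
  # The bytes arrive at the receiver in the order they were sent.
  def tcp_send(host: str, port: int, payload: bytes) -> None:
      with socket.create_connection((host, port)) as conn:   # connection setup
          conn.sendall(payload)                               # use the connection
      # leaving the 'with' block releases (closes) the connection

  # Connectionless (UDP): every datagram carries the full destination address
  # and is routed independently; ordering and delivery are not guaranteed.
  def udp_send(host: str, port: int, payload: bytes) -> None:
      with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
          sock.sendto(payload, (host, port))

  # Example calls (hypothetical endpoint):
  # tcp_send("example.org", 7, b"hello")
  # udp_send("example.org", 7, b"hello")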


  

OSI Reference Model

This model employs a hierarchical structure of seven layers, as shown in figure 3 below.


                                     
                                         Figure 3: The OSI Reference Model



OSI stands for Open Systems Interconnection. The model has seven layers and attempts to abstract the features common to all approaches to data communication, organizing them into layers so that each layer deals only with the layer directly above it and the layer directly below it. Before going into the functions and responsibilities of each layer, one important point should be made clear. Although the actual data transmission is vertical, starting from the application layer of the client's computer and going all the way down, across the network, and up to the application layer of the destination computer, each layer is programmed as though the data transmission were horizontal. In figure 3, peers are the entities in the corresponding layers on each machine; it is the peers that communicate using the layer's protocol. In reality, as stated above, no data are transferred directly from layer n on one machine to layer n on another machine.

Physical Layer

The main function of the physical layer is to transmit raw bits over a communication channel and to establish and terminate the connection to the communications medium. It is also responsible for ensuring that when one side sends a 1 bit, the other side receives a 1 bit and not a 0 bit.

Data Link Layer

The data link layer provides the means to transfer data between network entities. At the source machine it takes the bit stream from the network layer, breaks it into frames and passes them to the physical layer. At the receiving end, the data link layer detects, and where possible corrects, errors that occur during transmission and passes the corrected stream to the network layer. It is also concerned with flow control techniques.

Network Layer

This layer performs network routing, flow control and error control. Routing means determining how packets travel from source to destination, while congestion control prevents too many packets from being present in the subnet simultaneously and forming bottlenecks.

Transport Layer

The main task of the transport layer is to accept data from the session layer, split them into smaller units if necessary, pass them to the network layer, and ensure that all the pieces arrive correctly at the destination. It is the first true end-to-end layer, running all the way from the source machine to the destination machine, unlike the lower three layers, whose protocols are chained between each machine and its immediate neighbours. This is shown clearly in the diagram above.

Session Layer

The session layer is responsible for controlling the exchange of information (dialogue control) and for synchronization.

Presentation Layer

The presentation layer is responsible for translating data between the representation used inside the computer (for example ASCII) and the network's standard representation, and back. Because computers use different codes for representing character strings, a standard encoding must be used, and this is handled by the presentation layer. In short, this layer is concerned with the syntax and semantics of the information transmitted.

Application layer

The top layer of the model performs common application services for application processes. For example, software in the application layer handles the many different terminal types that exist by mapping virtual terminal software onto the real terminal. The layer contains a variety of protocols and is concerned with file transfer, electronic mail, remote job entry and various other services of general interest.
                                                                             

TCP/IP Reference Model

The figure below shows the OSI and TCP/IP network architectures, illustrating the layers of the OSI model and the corresponding layers of the TCP/IP model.


OSI                        TCP/IP
Application   (layer 7)    Application
Presentation  (layer 6)    Application
Session       (layer 5)    Application
Transport     (layer 4)    Transport
Network       (layer 3)    Internet
Data Link     (layer 2)    Host-to-Network (Subnet)
Physical      (layer 1)    Host-to-Network (Subnet)

Figure 4: The TCP/IP Reference model.

TCP/IP reference model was named after its two main protocols: TCP (Transmission Control Protocol) and IP (Internet Protocol). This model has the ability to connect multiple networks together in a way so that data transferred from a program in one computer are delivered safely to a similar program on another computer.
Unlike the OSI model, TCP/IP has four main layers, as indicated in the table above. Before comparing the two models, let us now explore each layer in detail.
Host-to-Network Layer: It translates data and address information into a format appropriate for the underlying network, for example an Ethernet or Token Ring network. The model does not specify which protocol the host uses to connect to the network, only that there must be one. Through this layer, communication is achieved over physical links such as twisted pair or fiber optics carrying 1's and 0's.

Internet Layer: This is a connectionless internetwork layer that defines a connectionless protocol called IP. It is concerned with delivering packets from source to destination. The packets travel independently, each possibly taking a different route, so they may arrive in a different order than they were sent. The internet layer does not care about the order in which packets arrive at the destination; that job belongs to the higher layers.

Transport Layer: It contains two end-to-end protocols. TCP is a connection-oriented protocol responsible for keeping track of the order in which packets are sent and for reassembling arriving packets in the correct order. It ensures that a byte stream originating on one machine is delivered without error to any other machine on the internet. The outgoing byte stream is fragmented into discrete messages and passed to the internet layer; at the destination, the inverse process reassembles the received messages into an output stream.
UDP, the User Datagram Protocol, is the second protocol in this layer. In contrast to TCP, UDP is a connectionless protocol, used by applications that provide their own flow control rather than relying on TCP's. It is an unreliable protocol, widely used in applications where prompt delivery is more important than accurate delivery, such as transmitting speech or video.

Application Layer: This is the top layer of the model and contains the protocols used by applications. It includes the virtual terminal protocol TELNET for remote access to a distant machine, the File Transfer Protocol FTP, and electronic mail (SMTP). It also contains protocols such as HTTP for fetching pages on the World Wide Web, among others.


OSI  versus  TCP/IP

So far we have discussed the key features of each model; we now turn to the differences and similarities between the two.
Starting with the ways the models differ: broadly speaking, the OSI reference model was devised before its protocols were invented, so the designers had little experience with the subject and did not know what functionality to put in each layer. In TCP/IP the protocols were designed first and the model was built around them, so the protocols fit the model well. An obvious difference is that OSI has seven layers and TCP/IP has four; the latter has no session or presentation layer, simply because they proved to be of little use to most applications in the OSI model. Another difference is that the network layer of the OSI model provides both connectionless and connection-oriented services, whereas the corresponding TCP/IP layer, the internet layer, is exclusively connectionless. In the transport layer the situation is reversed: TCP/IP supports both modes, while the equivalent OSI layer supports only connection-oriented communication.
Despite all these differences the two models have much in common. They are both based on the concept of a stack of independent protocols and the functionality of each layer is roughly similar.

Up to this point we have discussed the two basic network architectures, the OSI reference model and TCP/IP. You have probably heard much more about the TCP/IP protocols than about the OSI ones, but this does not mean that TCP/IP is the better architecture to use as a guide when designing new networks with new topologies. Later in this course we will therefore discuss the layers of the OSI model (minus the session and presentation layers), as it is the more complete framework for discussing computer networks.

Tuesday, November 17, 2015

Gigabit ethernet, Bridges, Switches, Hub, Router

Bridge

 

In telecommunication networks, a bridge is a product that connects a local area network (LAN) to another local area network that uses the same protocol (for example, Ethernet or token ring). You can envision a bridge as being a device that decides whether a message from you to someone else is going to the local area network in your building or to someone on the local area network in the building across the street. A bridge examines each message on a LAN, "passing" those known to be within the same LAN, and forwarding those known to be on the other interconnected LAN (or LANs).

In bridging networks, computer or node addresses have no specific relationship to location. For this reason, messages are sent out to every address on the network and accepted only by the intended destination node. Bridges learn which addresses are on which network and develop a learning table so that subsequent messages can be forwarded to the right network.

Bridging networks are generally interconnected local area networks, since broadcasting every message to all possible destinations would flood a larger network with unnecessary traffic. For this reason, routed networks such as the Internet use a scheme that assigns addresses to nodes so that a message or packet can be forwarded in one general direction rather than in all directions.

A bridge works at the data-link (physical network) level of a network, copying a data frame from one network to the next network along the communications path.

Why Bridge
Bridges are important in some networks because the networks are divided into many parts that are geographically remote from one another. Something is required to join these parts so that they can function as a whole. Take, for example, a divided LAN: if there is no way to join the separate LAN segments, an enterprise may be limited in its growth potential. The bridge is one of the tools for joining these LANs.

Secondly, a LAN (for example Ethernet) is limited in its transmission distance. Bridges can work around this limit by acting as repeaters, so that a geographically extensive network can be built within a building or campus. Hence networks constrained by distance can be created using bridges.

Third, the network administrator can control the amount of traffic that bridges send across expensive network media.

Fourth, a bridge is a plug-and-play device, so there is no need to configure it. If a machine is removed from the network, the network administrator does not need to update any bridge configuration, as bridges configure themselves.
Hub
A common connection point for devices in a network. Hubs are commonly used to connect segments of a LAN. A hub contains multiple ports. When a packet arrives at one port, it is copied to the other ports so that all segments of the LAN can see all packets.
Switch
In networks, a device that filters and forwards packets between LAN segments. Switches operate at the data link layer (layer 2) and sometimes the network layer (layer 3) of the OSI Reference Model and therefore support any packet protocol. LANs that use switches to join segments are called switched LANs or, in the case of Ethernet networks, switched Ethernet LANs.
Router
A device that forwards data packets along networks. A router is connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Routers are located at gateways, the places where two or more networks connect. Routers use headers and forwarding tables to determine the best path for forwarding packets, and they use protocols such as ICMP to communicate with each other and determine the best route between any two hosts.

Routers work in a manner similar to switches and bridges in that they filter network traffic, but rather than filtering by frame (MAC) address, they filter by network protocol address. Routers were born out of the need to divide networks logically instead of physically. An IP router can divide a network into various subnets so that only traffic destined for particular IP addresses can pass between segments. Routers recalculate the checksum and rewrite the MAC header of every packet. The price paid for this intelligent forwarding and filtering is usually measured in latency, the delay a packet experiences inside the router; such filtering takes more time than that done by a switch or bridge, which only looks at the Ethernet address. In more complex networks, however, overall efficiency can be improved. An additional benefit of routers is that they automatically filter broadcasts, but overall they are more complicated to set up.
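As a rough illustration of "headers and forwarding tables", the sketch below is my own simplified example (the prefixes and next-hop addresses are invented): the router compares the destination IP address against a table of prefixes and forwards along the longest match, falling back to a default route.

  import ipaddress

  # Toy forwarding table: prefix -> next hop (all entries invented for illustration).
  FORWARDING_TABLE = {
      ipaddress.ip_network("10.0.0.0/8"):  "192.168.1.1",
      ipaddress.ip_network("10.1.0.0/16"): "192.168.1.2",
      ipaddress.ip_network("0.0.0.0/0"):   "192.168.1.254",  # default route
  }

  def next_hop(destination: str) -> str:
      """Return the next hop for the longest prefix that contains the destination."""
      addr = ipaddress.ip_address(destination)
      matches = [net for net in FORWARDING_TABLE if addr in net]
      best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
      return FORWARDING_TABLE[best]

  print(next_hop("10.1.2.3"))  # -> 192.168.1.2  (the /16 beats the /8)
  print(next_hop("8.8.8.8"))   # -> 192.168.1.254 (only the default route matches)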

Hubs and switches


Each serves as a central connection for all of your network equipment and handles a data type known as frames. Frames carry your data. When a frame is received, it is amplified and then transmitted on to the port of the destination PC. The big difference between these two devices is in the method in which frames are being delivered.



In a hub, a frame is passed along or "broadcast" to every one of its ports. It doesn't matter that the frame is only destined for one port. The hub has no way of distinguishing which port a frame should be sent to. Passing it along to every port ensures that it will reach its intended destination. This places a lot of traffic on the network and can lead to poor network response times.



Additionally, a 10/100Mbps hub must share its bandwidth with each and every one of its ports. So when only one PC is broadcasting, it will have access to the maximum available bandwidth. If, however, multiple PCs are broadcasting, then that bandwidth will need to be divided among all of those systems, which will degrade performance.



A switch, however, keeps a record of the MAC addresses of all the devices connected to it. With this information, a switch can identify which system is sitting on which port. So when a frame is received, it knows exactly which port to send it to, without significantly increasing network response times. And, unlike a hub, a 10/100Mbps switch will allocate a full 10/100Mbps to each of its ports. So regardless of the number of PCs transmitting, users will always have access to the maximum amount of bandwidth. It's for these reasons a switch is considered to be a much better choice than a hub.
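The "record of MAC addresses" is just a table mapping each address to the port it was last seen on. A minimal sketch of that behaviour (my own toy model, with made-up MAC addresses; a frame is reduced to a source address, destination address and arrival port):

  # Toy model of a learning switch.
  class LearningSwitch:
      def __init__(self, num_ports: int):
          self.num_ports = num_ports
          self.mac_table = {}              # source MAC -> port it was last seen on

      def handle_frame(self, src_mac: str, dst_mac: str, in_port: int) -> list[int]:
          """Return the list of ports the frame is sent out of."""
          self.mac_table[src_mac] = in_port        # learn where the sender lives
          if dst_mac in self.mac_table:
              return [self.mac_table[dst_mac]]     # known destination: one port only
          # unknown destination: flood to every port except the one it came in on
          return [p for p in range(self.num_ports) if p != in_port]

  sw = LearningSwitch(num_ports=4)
  print(sw.handle_frame("aa:aa", "bb:bb", in_port=0))  # bb:bb unknown -> flood [1, 2, 3]
  print(sw.handle_frame("bb:bb", "aa:aa", in_port=2))  # aa:aa learned -> [0]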


Gigabit Ethernet

Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet frames at a rate of a gigabit per second (1,000,000,000 bits per second), as defined by the IEEE 802.3-2008 standard. It came into use beginning in 1999, gradually supplanting Fast Ethernet in wired local networks, where it performed considerably faster. The cables and equipment are very similar to previous standards and have been very common and economical since 2010.



Gigabit Ethernet is carried primarily on optical fiber (with very short distances possible on copper media). Existing Ethernet LANs with 10 and 100 Mbps cards can feed into a Gigabit Ethernet backbone. An alternative technology that competes with Gigabit Ethernet is ATM. A newer standard, 10-Gigabit Ethernet, is also becoming available.



Physical

1000Base-SX   Short wavelength, multimode fiber
1000Base-LX   Long wavelength, multimode or single-mode fiber
1000Base-CX   Copper jumpers (< 25 m), shielded twisted pair
1000Base-T    4 pairs, Cat 5 UTP
Signaling: 8B/10B




Datalink Layer, Framing, MAC, ALOHA, CSMA, 802.3, 802.5


 

Introduction.

The primary responsibility of this layer is to provide error-free transmission of data between two hosts attached to the same physical cable. At the source machine it receives data from the network layer, groups them into frames and passes them to the physical layer, from where they are sent to the destination machine. At the destination, the data link layer receives the frames, computes a checksum to make sure the frames received are identical to those sent, and finally passes the data up to the network layer.
Although the actual transmission path runs down one protocol stack and up the other, it is easier to think in terms of the two data link layers communicating with each other using a data link protocol over a virtual data path. The actual data path follows the route described in the previous tutorials (source machine: network layer - data link layer - physical layer - cable; destination machine: cable - physical layer - data link layer - network layer), as shown in the figure below.

There are three basic services that Data Link Layer commonly provides:
  1. Unacknowledged connectionless service.
  2. Acknowledged connectionless service.
  3. Acknowledged connection oriented service
In the first case, frames are sent independently to the destination without the destination machine acknowledging them. If a frame is lost, no attempt is made by the data link layer to recover it. Unacknowledged connectionless service is useful when the error rate is very low and recovery is left to higher layers in the network hierarchy. LANs also find this service appropriate for real-time traffic such as speech, in which late data are worse than bad data. Imagine holding a computer-to-computer voice conversation with another person: it is much better for the data to arrive on time but slightly distorted (as if the dialogue were carried over an ordinary telephone line) than to arrive in perfect quality after a two-second delay.
The second case is a more reliable service in which every frame is individually acknowledged as soon as it arrives at the destination machine. In this way the sender knows whether or not each frame arrived safely. Acknowledged connectionless service is useful on unreliable channels such as wireless systems.
Finally, there is acknowledged connection-oriented service. The source and destination machines establish a connection before any data are transferred. Each frame sent is numbered, as if it had a specific ID, and the data link layer guarantees that each frame is received by the other end exactly once and in the right order. This is the most sophisticated service the data link layer can provide to the network layer.

Framing

Framing is a technique performed by the data link layer. In the source machine, the data link layer receives a bit stream from the network layer, breaks it into discrete frames and computes a checksum for each frame. The frame is then sent to the destination machine, where the checksum is recomputed. If it differs from the one contained in the frame, an error has occurred, and the data link layer discards the frame and sends back an error report.
There are many methods of breaking a bit stream into frames, but I will concentrate on only two of them. The procedure may appear easy, but it is in fact a delicate one, because the receiving end can have difficulty telling where one frame ends and the next begins.

The first method is called character stuffing. A specific sequence of characters marks the start and end of each frame: the start is represented by DLE STX and the end by DLE ETX (DLE stands for data link escape, STX for start of text and ETX for end of text). If the destination loses track of the frame boundaries, it simply looks for these sequences to figure out where it is. The problem with this approach is that the same bit patterns might occur within the data. To overcome this, the sender's data link layer inserts an ASCII DLE character just before each "accidental" DLE character in the data. At the receiving machine, the data link layer removes the stuffed character and passes the original data to the network layer. The technique is shown graphically in the figure below.


    Sequence (a) shows the original data passed by the network layer to the data link layer, (b) shows the data after stuffing, and (c) shows the data passed to the network layer on the receiving machine.
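A short sketch of the idea in Python (the byte values are the standard ASCII codes for DLE, STX and ETX; the rest follows the description above):

  DLE, STX, ETX = b"\x10", b"\x02", b"\x03"   # data link escape, start of text, end of text

  def stuff(payload: bytes) -> bytes:
      """Frame the payload as DLE STX ... DLE ETX, doubling any DLE inside the data."""
      body = payload.replace(DLE, DLE + DLE)   # extra DLE before each accidental DLE
      return DLE + STX + body + DLE + ETX

  def unstuff(frame: bytes) -> bytes:
      """Strip the framing and collapse doubled DLEs back to single ones."""
      body = frame[2:-2]                       # drop DLE STX ... DLE ETX
      return body.replace(DLE + DLE, DLE)

  data = b"A" + DLE + b"B"                     # payload that happens to contain a DLE
  assert unstuff(stuff(data)) == data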

Another technique used for framing is called bit stuffing. It is analogous to character stuffing, but instead of ASCII characters it operates on individual bits. Each frame begins and ends with a special bit pattern, 01111110, called the flag byte. Whenever the data to be transmitted contain five consecutive 1's, a 0 is inserted after them so that the data cannot be mistaken for a frame delimiter. On the receiving end, the stuffed bits are discarded, just as in character stuffing, and the data are passed to the network layer. The technique is demonstrated in the diagram below:


(a) The original bit stream.
(b) The data after stuffing by the source machine's data link layer: whenever it encounters five consecutive 1's in the data, it automatically stuffs a 0 bit into the stream.
(c) The data after destuffing by the receiver's data link layer.
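The same idea at the bit level; here is a small sketch of my own, representing the bit stream as a Python string of '0'/'1' characters for readability:

  FLAG = "01111110"   # frame delimiter used by bit-oriented protocols such as HDLC

  def bit_stuff(bits: str) -> str:
      """Insert a 0 after every run of five consecutive 1s in the payload."""
      out, run = [], 0
      for b in bits:
          out.append(b)
          run = run + 1 if b == "1" else 0
          if run == 5:
              out.append("0")   # stuffed bit: nothing in the data can look like a flag
              run = 0
      return "".join(out)

  def bit_unstuff(bits: str) -> str:
      """Drop the 0 that follows every run of five consecutive 1s."""
      out, run, skip = [], 0, False
      for b in bits:
          if skip:              # this bit is the stuffed 0; discard it
              skip, run = False, 0
              continue
          out.append(b)
          run = run + 1 if b == "1" else 0
          if run == 5:
              skip, run = True, 0
      return "".join(out)

  payload = "011111101111110"                   # contains flag-like runs of 1s
  frame = FLAG + bit_stuff(payload) + FLAG      # what actually goes on the wire
  assert bit_unstuff(bit_stuff(payload)) == payload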


Protocols in this layer handle two main situations. In the first, data frames are sent in only one direction, meaning that only one machine wishes to transmit data to the other, as shown below.

 
The sender transmits data frame data1. The receiving machine receives the frame and sends an acknowledgement. As soon as the acknowledgement gets back to the sender, another data frame, data2, is transmitted. For some reason data2 does not arrive at the destination, so the receiver never sends an acknowledgement for it. The sender waits a certain time for the acknowledgement; when the timer runs out, it retransmits the same data frame. This time the data arrive correctly and the receiver sends an acknowledgement for data2.
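That sender-side behaviour (stop-and-wait with a retransmission timer) can be sketched as follows; send_frame and wait_for_ack are hypothetical primitives of my own standing in for the real channel, with losses simulated at random:

  import random

  TIMEOUT_SECONDS = 1.0   # how long the sender waits for an acknowledgement

  def send_frame(seq: int, payload: bytes) -> None:
      """Stand-in transmit primitive; just shows what would go on the wire."""
      print(f"sending frame {seq}: {payload!r}")

  def wait_for_ack(seq: int, timeout: float) -> bool:
      """Stand-in receive primitive; the ack is 'lost' 30% of the time for the demo."""
      return random.random() > 0.3

  def stop_and_wait_send(frames: list[bytes]) -> None:
      for seq, payload in enumerate(frames):
          while True:
              send_frame(seq, payload)
              if wait_for_ack(seq, TIMEOUT_SECONDS):
                  break                          # acknowledged: move on to the next frame
              print(f"timeout for frame {seq}, retransmitting")

  stop_and_wait_send([b"data1", b"data2"])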
There are cases, though, where data need to be sent in both directions simultaneously. One way to achieve this is to have two separate communication channels, one for data and one for acknowledgements; but then the reverse channel is almost entirely wasted, since we have two circuits and use the capacity of only one. A better idea is to use the same circuit for data in both directions: data frames from A to B are intermixed with acknowledgement frames from A to B, and a header field in each frame lets the receiver distinguish a data frame from an acknowledgement. Moreover, when a data frame arrives, instead of immediately sending a separate control frame, the receiver restrains itself and waits until the network layer passes it the next packet; the acknowledgement is then attached to the outgoing data frame. In effect, the acknowledgement gets a free ride on the next outgoing data frame. This technique is widely known as piggybacking.

HDLC

An example of what we have seen so far is HDLC (High-Level Data Link Control), used for instance as the basis of the X.25 link layer. It is a widely used data link protocol that is bit-oriented and uses bit stuffing, like the other bit-oriented protocols in this layer. All bit-oriented protocols use the common frame structure shown in the following figure.

You can recognize the frame borders because they consist of the same bit pattern, 01111110 (the flag byte), marking the start and end of the frame. The frame contains an address field, used to identify one of the terminals on multidrop lines; a control field, used for sequence numbers, acknowledgements and other purposes; a data field, which contains arbitrary information; and a checksum field.
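As a very rough sketch of that layout (my own example: the address and control values are arbitrary, a plain 16-bit sum stands in for the real CRC frame check sequence, and bit stuffing of the body is omitted):

  FLAG = 0x7E   # 01111110

  def build_hdlc_style_frame(address: int, control: int, data: bytes) -> bytes:
      """Lay out Flag | Address | Control | Data | Checksum | Flag."""
      body = bytes([address, control]) + data
      checksum = sum(body) & 0xFFFF          # placeholder: real HDLC uses a CRC here
      return bytes([FLAG]) + body + checksum.to_bytes(2, "big") + bytes([FLAG])

  frame = build_hdlc_style_frame(address=0x03, control=0x10, data=b"hello")
  print(frame.hex())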

The Medium Access Sublayer (MAC)

This section deals with broadcast networks and their protocols. The central problem in a broadcast network is determining who gets to use the channel when many users want to transmit over it. The protocols used to decide who goes next on a multiaccess channel belong to a sublayer of the data link layer called the MAC (Medium Access Control) sublayer.

 

Pure ALOHA

One of the earliest protocols for allocating a multiple access channel is ALOHA. The idea is simple: users transmit whenever they have data to send. Frames are destroyed when a collision occurs; when a sender detects a collision, it waits a random amount of time and retransmits the frame. With this method the best theoretical channel utilization (throughput) that can be achieved is about 18%. The term throughput means the amount of work that a system can do in a given time period.

Slotted ALOHA

In slotted ALOHA, time is divided into discrete intervals, each corresponding to one frame. A computer is not permitted to send whenever it has data; instead it must wait for the beginning of the next slot. The best that can be achieved is roughly 37% of slots empty, 37% successes and 26% collisions.
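Those percentages come from the standard throughput formulas: S = G * e^(-2G) for pure ALOHA and S = G * e^(-G) for slotted ALOHA, where G is the mean number of transmission attempts per frame time. A quick check of the maxima:

  import math

  def pure_aloha_throughput(G: float) -> float:
      return G * math.exp(-2 * G)    # maximised at G = 0.5

  def slotted_aloha_throughput(G: float) -> float:
      return G * math.exp(-G)        # maximised at G = 1

  print(f"pure ALOHA max:    {pure_aloha_throughput(0.5):.3f}")    # ~0.184, i.e. ~18%
  print(f"slotted ALOHA max: {slotted_aloha_throughput(1.0):.3f}") # ~0.368, i.e. ~37%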

Nonpersistent CSMA (Carrier Sense Multiple Access)

Before sending, a station senses the channel. If no one else is sending, the station begins transmitting. However, if the channel is already in use, the station waits a random time and then repeats the algorithm.

1-Persistent CSMA/CD (Carrier Sense Multiple Access with Collision Detection)

This is an improvement over the previous techniques. When a station wants to transmit, it listens to the cable (carrier sense). If the cable is busy, the station waits until it goes idle; otherwise it transmits. If two or more stations begin transmitting simultaneously on an idle cable, they will collide; as soon as they detect the collision, the stations abort their transmissions (collision detection). This is a very important enhancement, as it saves time and bandwidth. The stations then wait a random time and repeat the whole process all over again. CSMA/CD is widely used in the MAC sublayer of LANs, most notably Ethernet.
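A sketch of that station behaviour, with the "wait a random time" step modelled as a (simplified) binary exponential backoff like Ethernet's; channel_busy and transmit are stand-ins I made up for the shared cable:

  import random

  def csma_cd_send(channel_busy, transmit, max_attempts: int = 16) -> bool:
      """1-persistent CSMA/CD sketch.

      channel_busy() -> bool : carrier sense
      transmit()     -> bool : True on success, False if a collision was detected
      """
      for attempt in range(max_attempts):
          while channel_busy():
              pass                               # 1-persistent: keep listening until idle
          if transmit():
              return True                        # sent without collision
          # collision detected: abort and wait a random number of slot times
          slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
          print(f"collision on attempt {attempt + 1}, backing off {slots} slot times")
      return False                               # give up after too many collisions

  # Toy environment: the channel is always idle, and 40% of transmissions collide.
  csma_cd_send(channel_busy=lambda: False, transmit=lambda: random.random() > 0.4)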

 


IEEE Standard 802.3 and Ethernet

Ethernet is the most widely installed local area network technology. It is specified in a standard called IEEE 802.3 (IEEE is the Institute of Electrical and Electronics Engineers). An Ethernet LAN typically uses coaxial cable or twisted pair wires and provides speeds of up to 10 Mbps. The devices connected to the LAN compete for access using the CSMA/CD protocol. The figure below illustrates the basic operation of an Ethernet.



Machine 2 wants to send a message to machine 4, but first it 'listens' to the cable to make sure that no one else is using the network.
If all is clear, it starts to transmit its data onto the network (represented by the yellow flashing screens). Each packet of data contains the destination address, the sender's address and the data to be transmitted.
The signal moves down the cable and is received by every machine on the network, but because it is addressed only to machine 4, the other machines ignore it.
Machine 4 then sends a message back to machine 2 acknowledging receipt of the data (represented by the purple flashing screens).

As stated before, it is possible for two machines to try to transmit simultaneously over the cable. The result can be observed in the following figure.


Machines 2 and 5 decide to transmit at the same time.
The packets collide; each machine detects the collision and immediately aborts its transmission.
They then each wait a random period of time and transmit again.

 

IEEE Standard 802.5: Token Ring

The token ring protocol is the second most widely used local area network protocol after Ethernet. The IEEE 802.5 token ring technology provides data transfer rates of either 4 or 16 Mbps. A token ring is a collection of individual point-to-point links, one connecting each station to the next, that happen to form a circle. The token ring operation can be seen with the help of the figure below.
How it works: a special 3-byte bit pattern called a "token" circulates around the ring. A station wishing to transmit on the ring must seize the token. The station then alters one bit of the token, which becomes the first part of the data frame the station wishes to transmit. Having only one token on the ring means that only one station can transmit at a time, which solves the problem of contention for the shared medium.


In the example, machine 1 wants to send some data to machine 4. It captures the token and writes its data and the recipient's address onto it (indicated by the yellow flashing screen). The frame travels first to machines 2 and 3, which read the address, realize it is not their own, and pass the frame on. When it reaches machine 4, the address matches and machine 4 stores the packet (represented by the yellow flashing screen).
Machine 4 then sends an acknowledgement back to machine 1 to say that it has received the packet (represented by the purple flashing screen).
Machines 5 and 6 forward the acknowledgement on to machine 1, which sent the original message.
As soon as machine 1 receives the acknowledgement from machine 4 (indicated by the purple flashing screen), it regenerates the free token and puts it back onto the ring, ready for the next machine to use.
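The walk of the frame around the ring can be sketched as a small loop (my own toy model of the six-station example above; the "token" itself is reduced to the print statements):

  NUM_STATIONS = 6

  def ring_order(start: int):
      """Stations downstream of `start`, in ring order, ending back at `start`."""
      for step in range(1, NUM_STATIONS + 1):
          yield (start - 1 + step) % NUM_STATIONS + 1

  def send_on_ring(sender: int, receiver: int) -> None:
      print(f"station {sender} seizes the token and transmits")
      for station in ring_order(sender):
          if station == receiver:
              print(f"station {station} copies the frame (it is the destination)")
          elif station == sender:
              print(f"station {station} removes the frame and reissues the free token")
          else:
              print(f"station {station} repeats the frame to the next station")

  send_on_ring(sender=1, receiver=4)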