308 Cards in this Set

Network-Aware Applications

Implement the Application layer protocols and are able to communicate directly with the lower layers of the protocol stack.

E-mail clients and web browsers are examples of these types of applications.
Application Layer Protocols
Protocols establish consistent rules for exchanging data between applications and services loaded on the participating devices.

Protocols specify how data inside the messages is structured and the types of messages that are sent between source and destination.

These messages can be requests for services, acknowledgments, data messages, status messages, or error messages. Protocols also define message dialogues, ensuring that a message being sent is met by the expected response and the correct services are invoked when data transfer occurs.
Application Layer Services
Other programs may need the assistance of Application layer services to use network resources, like file transfer or network print spooling. Though transparent to the user, these services are the programs that interface with the network and prepare the data for transfer.

Different types of data - whether it is text, graphics, or video - require different network services to ensure that it is properly prepared for processing by the functions occurring at the lower layers of OSI model.
Daemons
Daemons are described as "listening" for a request from a client, because they are programmed to respond whenever the server receives a request for the service provided by the daemon.

When a daemon "hears" a request from a client, it exchanges appropriate messages with the client, as required by its protocol, and proceeds to send the requested data to the client in the proper format.
Peer-to-Peer Networks
In a peer-to-peer network, two or more computers are connected via a network and can share resources (such as printers and files) without having a dedicated server. Every connected end device (known as a peer) can function as either a server or a client.
Peer-to-Peer Applications
A peer-to-peer application (P2P), unlike a peer-to-peer network, allows a device to act as both a client and a server within the same communication. In this model, every client is a server and every server a client. Both can initiate a communication and are considered equal in the communication process.

However, peer-to-peer applications require that each end device provide a user interface and run a background service. When you launch a specific peer-to-peer application it invokes the required user interface and background services. After that the devices can communicate directly.

Some P2P applications use a hybrid system where resource sharing is decentralized but the indexes that point to resource locations are stored in a centralized directory. In a hybrid system, each peer accesses an index server to get the location of a resource stored on another peer.

The index server can also help connect two peers, but once connected, the communication takes place between the two peers without additional communication to the index server.

Peer-to-peer applications can be used on peer-to-peer networks, client/server networks, and across the Internet.
DNS Services
The Domain Name System (DNS) was created for domain name-to-address resolution. DNS uses a distributed set of servers to resolve the names associated with numbered addresses.

DNS protocol communications use a single format called a message. This message format is used for all types of client queries and server responses, error messages, and the transfer of resource record information between servers.

The DNS client is sometimes called the DNS resolver.
DNS Records Types
A DNS server provides the name resolution using the name daemon, which is often simply called named (pronounced name-dee).

The DNS server stores different types of resource records used to resolve names. These records contain the name, address, and type of record.
Some of these record types are:

• A - an end device address
• NS - an authoritative name server
• CNAME - the canonical name (or Fully Qualified Domain Name) for an alias; used when multiple services share a single network address but each service has its own entry in DNS
• MX - mail exchange record; maps a domain name to a list of mail exchange servers for that domain
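The record types listed above can be modeled as a simple lookup table. This is only an illustrative sketch; the `describe_record` helper and its descriptions are made up for this example, not part of any DNS library.

```python
# Sketch: the DNS resource record types described above as a lookup table.
# The function name and descriptions are hypothetical, for illustration only.

DNS_RECORD_TYPES = {
    "A": "end device address",
    "NS": "authoritative name server",
    "CNAME": "canonical name for an alias",
    "MX": "mail exchange server for a domain",
}

def describe_record(rtype):
    """Return a short description of a DNS resource record type."""
    return DNS_RECORD_TYPES.get(rtype.upper(), "unknown record type")

print(describe_record("a"))   # end device address
print(describe_record("MX"))  # mail exchange server for a domain
```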
DNS Hierarchy
The Domain Name System uses a hierarchical system to create a name database to provide name resolution. The hierarchy looks like an inverted tree with the root at the top and branches below.

At the top of the hierarchy, the root servers maintain records about how to reach the top-level domain servers, which in turn have records that point to the secondary level domain servers and so on.
Authoritative DNS Server
If a given server has resource records that correspond to its level in the domain hierarchy, it is said to be authoritative for those records
Domain Name System (DNS) Port Number
TCP/UDP Port 53
Hypertext Transfer Protocol (HTTP) Port Number
TCP Port 80
File Transfer Protocol (FTP) Port Number
TCP Ports 20 and 21

Port 21 is for control traffic

Port 20 is for data transfer
Dynamic Host Configuration Protocol Port Number
UDP Ports 67 and 68
Telnet Port Number
TCP Port 23
Post Office Protocol (POP) Port Number
TCP Port 110
Simple Mail Transfer Protocol (SMTP) Port Number
TCP Port 25
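The port assignments on the cards above can be collected into one table. A minimal sketch; the dictionary layout and the `port_for` helper are this example's own invention.

```python
# Sketch: the well-known port assignments listed in the cards above.
# Each entry maps a service to (transport protocol, port number).

WELL_KNOWN_PORTS = {
    "DNS": ("TCP/UDP", 53),
    "HTTP": ("TCP", 80),
    "FTP-data": ("TCP", 20),
    "FTP-control": ("TCP", 21),
    "DHCP-server": ("UDP", 67),
    "DHCP-client": ("UDP", 68),
    "Telnet": ("TCP", 23),
    "POP3": ("TCP", 110),
    "SMTP": ("TCP", 25),
}

def port_for(service):
    """Look up the port number for a named service."""
    return WELL_KNOWN_PORTS[service][1]

print(port_for("HTTP"))  # 80
```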
Hypertext Transfer Protocol (HTTP)
One of the protocols in the TCP/IP suite, HTTP was originally developed to publish and retrieve HTML pages.

It is now used for distributed, collaborative information systems.
HTTP GET Command
GET is a client request for data. A web browser sends the GET message to request pages from a web server. Once the server receives the GET request, it responds with a status line, such as HTTP/1.1 200 OK, and a message of its own, the body of which may be the requested file, an error message, or some other information.
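The GET exchange above can be sketched without a network: build the request text a browser would send and parse a status line such as "HTTP/1.1 200 OK". The host name and both helper functions are hypothetical, for illustration only.

```python
# Sketch: building an HTTP GET request and parsing a response status line,
# per the description above. Host name is hypothetical.

def build_get_request(host, path="/"):
    """Construct a minimal HTTP/1.1 GET request (CRLF line endings)."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n")

def parse_status_line(status_line):
    """Split a status line into (version, numeric code, reason phrase)."""
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason

print(build_get_request("example.com").splitlines()[0])  # GET / HTTP/1.1
print(parse_status_line("HTTP/1.1 200 OK"))              # ('HTTP/1.1', 200, 'OK')
```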
HTTP POST/PUT Commands
POST and PUT are used to send messages that upload data to the web server.

When the user enters data into a form embedded in a web page, POST includes the data in the message sent to the server. The POST messages upload information to the server in plain text that can be intercepted and read.

PUT uploads resources or content to the web server.
HTTPS
HTTP Secure (HTTPS) protocol is used for accessing or posting web server information. HTTPS can use authentication and encryption to secure data as it travels between the client and server. HTTPS specifies additional rules for passing data between the Application layer and the Transport Layer.
Simple Mail Transfer Protocol (SMTP)
The Simple Mail Transfer Protocol (SMTP) governs the transfer of outbound e-mail from the sending client to the e-mail server (MDA), as well as the transport of e-mail between e-mail servers (MTA). SMTP enables e-mail to be transported across data networks between different types of server and client software and makes e-mail exchange over the Internet possible.
SMTP Commands
• HELO - identifies the SMTP client process to the SMTP server process
• EHLO - Is a newer version of HELO, which includes services extensions
• MAIL FROM - Identifies the sender
• RCPT TO - Identifies the recipient
• DATA - Identifies the body of the message
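The command sequence above can be assembled the way a client would send it. A sketch only: the host name, addresses, and the `smtp_dialogue` helper are hypothetical, and a real exchange also involves server reply codes between commands.

```python
# Sketch: the SMTP command sequence described above, as the client sends it.
# Client host name and addresses are hypothetical.

def smtp_dialogue(sender, recipient, body):
    return [
        "HELO client.example.com",   # identify the client to the server
        f"MAIL FROM:<{sender}>",     # identify the sender
        f"RCPT TO:<{recipient}>",    # identify the recipient
        "DATA",                      # what follows is the message body
        body,
        ".",                         # a lone dot terminates the body
        "QUIT",
    ]

cmds = smtp_dialogue("alice@example.com", "bob@example.org", "Hello Bob")
print(cmds[0])  # HELO client.example.com
```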
POP and POP3 (Post Office Protocol, version 3)
POP and POP3 (Post Office Protocol, version 3) are inbound mail delivery protocols and are typical client/server protocols. They deliver e-mail from the e-mail server to the client (MUA). The MDA listens for when a client connects to a server. Once a connection is established, the server can deliver the e-mail to the client.
Mail Transfer Agent (MTA)
The Mail Transfer Agent (MTA) process is used to forward e-mail. The MTA receives messages from the MUA or from another MTA on another e-mail server. Based on the message header, it determines how a message has to be forwarded to reach its destination. If the mail is addressed to a user whose mailbox is on the local server, the mail is passed to the MDA. If the mail is for a user not on the local server, the MTA routes the e-mail to the MTA on the appropriate server.
Mail Delivery Agent (MDA)
The Mail Delivery Agent (MDA) accepts a piece of e-mail from a Mail Transfer Agent (MTA) and performs the actual delivery. The MDA receives all the inbound mail from the MTA and places it into the appropriate users' mailboxes.

The MDA can also resolve final delivery issues, such as virus scanning, spam filtering, and return-receipt handling. Most e-mail communications use the MUA, MTA, and MDA applications
Computers that do not have an MUA can still retrieve and send messages by connecting to a mail service through a web browser.
File Transfer Protocol (FTP)
The File Transfer Protocol (FTP) is another commonly used Application layer protocol. FTP was developed to allow for file transfers between a client and a server. An FTP client is an application that runs on a computer that is used to push and pull files from a server running the FTP daemon (FTPd).

The file transfer can happen in either direction. The client can download (pull) a file from the server, or the client can upload (push) a file to the server.
FTP Server Transfer Process
To successfully transfer files, FTP requires two connections between the client and the server: one for commands and replies, the other for the actual file transfer.

The client establishes the first connection to the server on TCP port 21. This connection is used for control traffic, consisting of client commands and server replies.

The client establishes the second connection to the server over TCP port 20. This connection is for the actual file transfer and is created every time there is a file transferred.
Dynamic Host Configuration Protocol (DHCP)
The Dynamic Host Configuration Protocol (DHCP) service enables devices on a network to obtain IP addresses and other information from a DHCP server. This service automates the assignment of IP addresses, subnet masks, gateway and other IP networking parameters.

DHCP allows a host to obtain an IP address dynamically when it connects to the network. The DHCP server is contacted and an address requested. The DHCP server chooses an address from a configured range of addresses called a pool and assigns ("leases") it to the host for a set period.
DHCP - How a server assigns IP Addresses
When a DHCP-configured device boots up or connects to the network, the client broadcasts a DHCP DISCOVER packet to identify any available DHCP servers on the network.

A DHCP server replies with a DHCP OFFER, which is a lease offer message with an assigned IP address, subnet mask, DNS server, and default gateway information as well as the duration of the lease. The client may receive multiple DHCP OFFER packets if there is more than one DHCP server on the local network, so it must choose between them.

To accept an offer, the client broadcasts a DHCP REQUEST packet that identifies the explicit server and lease offer that the client is accepting. A client may choose to request an address that it had previously been allocated by the server.
DHCP - How a computer accepts an IP Address
Assuming that the IP address requested by the client, or offered by the server, is still valid, the server would return a DHCP ACK (Acknowledgement) message that acknowledges to the client the lease is finalized.

If the offer is no longer valid - perhaps due to a time-out or another client allocating the lease - then the selected server will respond with a DHCP NAK message (Negative Acknowledgement). If a DHCP NAK message is returned, then the selection process must begin again with a new DHCP DISCOVER message being transmitted.

Once the client has the lease, it must be renewed prior to the lease expiration through another DHCP REQUEST message.
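The DISCOVER/OFFER/REQUEST/ACK exchange above can be simulated as a tiny state machine. The class name, the address pool, and the simplified single-offer behavior are all hypothetical; a real server also tracks lease timers and handles renewals.

```python
# Sketch: the DHCP lease exchange described above, simulated in-process.
# Addresses in the pool are hypothetical.

class DhcpServer:
    def __init__(self, pool):
        self.pool = list(pool)   # configured range of addresses ("pool")
        self.leases = {}         # client id -> leased address

    def offer(self, client_id):
        """Reply to a DISCOVER with an OFFER drawn from the pool."""
        return self.pool[0] if self.pool else None

    def ack(self, client_id, requested):
        """Reply to a REQUEST with an ACK, or a NAK if the offer is no longer valid."""
        if requested in self.pool:
            self.pool.remove(requested)
            self.leases[client_id] = requested
            return "ACK"
        return "NAK"

server = DhcpServer(["192.168.1.10", "192.168.1.11"])
offered = server.offer("client-a")       # DISCOVER -> OFFER
print(server.ack("client-a", offered))   # REQUEST -> ACK
```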
Server Message Block (SMB)
The Server Message Block (SMB) is a client/server file sharing protocol. It describes the structure of shared network resources, such as directories, files, printers, and serial ports. It is a request-response protocol. Unlike the file sharing supported by FTP, clients establish a long-term connection to servers. Once the connection is established, the user of the client can access the resources on the server as if the resource is local to the client host.

Developed by IBM - now supported by Microsoft.
Gnutella P2P Operation
Gnutella-compatible client software allows users to connect to Gnutella services over the Internet and to locate and access resources shared by other Gnutella peers.

Gnutella services do not use a central database to record all the files available on the peers. Instead, the devices on the network each tell the other what files are available when queried and use the Gnutella protocol and services to support locating resources.

The client applications will search for other Gnutella nodes to connect to. These nodes handle queries for resource locations and replies to those requests. They also govern control messages, which help the service discover other nodes. The actual file transfers usually rely on HTTP services.
Gnutella Protocol Packets
The Gnutella protocol defines five different packet types:

• ping - for device discovery
• pong - as a reply to a ping
• query - for file location
• query hit - as a reply to a query
• push - as a download request
Telnet
Telnet uses software to create a virtual device that provides the same features as a terminal session with access to the server command line interface (CLI).

A connection using Telnet is called a Virtual Terminal (VTY) session, or connection.

Once a Telnet connection is established, users can perform any authorized function on the server, just as if they were using a command line session on the server itself. If authorized, they can start and stop processes, configure the device, and even shut down the system.
Virtual Terminal (VTY)
Telnet session - a plain text session to access a server's command line interface (CLI).
Telnet daemon
The server runs a service called the Telnet daemon. A virtual terminal connection is established from an end device using a Telnet client application. Most operating systems include an Application layer Telnet client.
Telnet Commands
Each Telnet command consists of at least two bytes. The first byte is a special character called the Interpret as Command (IAC) character. As its name implies, the IAC defines the next byte as a command rather than text.

• Are You There (AYT) - Lets the user request that something appear on the terminal screen to indicate that the VTY session is active.
• Erase Line (EL) - Deletes all text from the current line.
• Interrupt Process (IP) - Suspends, interrupts, aborts, or terminates the process to which the Virtual Terminal is connected. For example, if a user started a program on the Telnet server via the VTY, he or she could send an IP command to stop the program.
Secure Shell (SSH) protocol
While the Telnet protocol supports user authentication, it does not support the transport of encrypted data. All data exchanged during a Telnet session is transported as plain text across the network. This means that the data can be intercepted and easily understood.

The Secure Shell (SSH) protocol offers an alternate and secure method for server access. SSH provides the structure for secure remote login and other secure network services. It also provides stronger authentication than Telnet and supports the transport of session data using encryption.
Transport Layer Functions
The Transport layer encompasses these functions:
• Enables multiple applications to communicate over the network at the same time on a single device
• Ensures that, if required, all the data is received reliably and in order by the correct application
• Employs error handling mechanisms
• Tracks the individual communication between applications on the source and destination hosts
• Segments data and manages each piece
• Reassembles the segments into streams of application data
• Identifies the different applications
Port Numbers
In order to pass data streams to the proper applications, the Transport layer must identify the target application. To accomplish this, the Transport layer assigns an application an identifier.

The TCP/IP protocols call this identifier a port number. Each software process that needs to access the network is assigned a port number unique in that host. This port number is used in the Transport layer header to indicate to which application that piece of data is associated.
Interleaved (multiplexed)
Dividing data into small parts, and sending these parts from the source to the destination, enables many different communications to be interleaved (multiplexed) on the same network.

At the Transport layer, each particular set of pieces flowing between a source application and a destination application is known as a conversation.

To identify each segment of data, the Transport layer adds to the piece a header containing binary data. This header contains fields of bits. It is the values in these fields that enable different Transport layer protocols to perform different functions.
Same Order Delivery
Because networks may provide multiple routes with different transmission times, data can arrive in the wrong order. By numbering and sequencing the segments, the Transport layer can ensure that these segments are reassembled into the proper order.
Flow Control
Network hosts have limited resources, such as memory or bandwidth. When the Transport layer is aware that these resources are overtaxed, some protocols can request that the sending application reduce the rate of data flow. This is done at the Transport layer by regulating the amount of data the source transmits as a group. Flow control can prevent the loss of segments on the network and avoid the need for retransmission.
User Datagram Protocol (UDP)
UDP is a simple, connectionless protocol, described in RFC 768. It has the advantage of providing for low overhead data delivery. The pieces of communication in UDP are called datagrams. These datagrams are sent as "best effort" by this Transport layer protocol.
Applications that use UDP include:
• Domain Name System (DNS)
• Video Streaming
• Voice over IP (VoIP)
Transmission Control Protocol (TCP)
TCP is a connection-oriented protocol, described in RFC 793. TCP incurs additional overhead to gain functions. Additional functions specified by TCP are the same order delivery, reliable delivery, and flow control.
Applications that use TCP are:
• Web Browsers
• E-mail
• File Transfers
TCP Header Size
Each TCP segment has 20 bytes of overhead in the header encapsulating the Application layer data
UDP Header Size
Each UDP segment has 8 bytes of overhead in the header encapsulating the Application layer data
Client Port Numbers
The source port in a segment or datagram header of a client request is randomly generated from port numbers greater than 1023.

As long as it does not conflict with other ports in use on the system, the client can choose any port number from the range of default port numbers used by the operating system. This port number acts like a return address for the requesting application.
Sockets
The combination of the Transport layer port number and the Network layer IP address assigned to the host uniquely identifies a particular process running on a specific host device. This combination is called a socket.

A socket pair, consisting of the source and destination IP addresses and port numbers, is also unique and identifies the conversation between the two hosts.
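The definitions above can be sketched as plain tuples: a socket is an (IP address, port) pair, and a socket pair identifies one conversation. All addresses and the `same_conversation` helper here are hypothetical, for illustration only.

```python
# Sketch: sockets and a socket pair as tuples, per the definitions above.
# Addresses are hypothetical.

client_socket = ("192.168.1.5", 49152)   # source IP + ephemeral source port
server_socket = ("10.1.1.100", 80)       # destination IP + well-known port

# The socket pair uniquely identifies this one conversation between hosts.
socket_pair = (client_socket, server_socket)

def same_conversation(pair_a, pair_b):
    """Two pairs name the same conversation only if all four values match."""
    return pair_a == pair_b

print(same_conversation(socket_pair,
                        (("192.168.1.5", 49152), ("10.1.1.100", 80))))  # True
```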
Internet Assigned Numbers Authority (IANA)
The Internet Assigned Numbers Authority (IANA) assigns port numbers. IANA is a standards body that is responsible for assigning various addressing standards.
Well Known Ports
0 to 1023

These numbers are reserved for services and applications. They are commonly used for applications such as HTTP (web server), POP3/SMTP (e-mail server), and Telnet. By defining these well-known ports for server applications, client applications can be programmed to request a connection to that specific port and its associated service.
Registered Ports
1024 to 49151

These port numbers are assigned to user processes or applications. These processes are primarily individual applications that a user has chosen to install rather than common applications that would receive a Well Known Port. When not used for a server resource, these ports may also be dynamically selected by a client as its source port.
Dynamic or Private Ports - (Ephemeral Ports)
Numbers 49152 to 65535

Also known as Ephemeral Ports, these are usually assigned dynamically to client applications when initiating a connection. It is not very common for a client to connect to a service using a Dynamic or Private Port (although some peer-to-peer file sharing programs do).
Netstat
Sometimes it is necessary to know which active TCP connections are open and running on a networked host. Netstat is an important network utility that can be used to verify those connections. Netstat lists the protocol in use, the local address and port number, the foreign address and port number, and the state of the connection.
Which Transport protocol uses connection-oriented sessions
The key distinction between TCP and UDP is reliability. The reliability of TCP communication is performed using connection-oriented sessions.

Before a host using TCP sends data to another host, the Transport layer initiates a process to create a connection with the destination. This connection enables the tracking of a session, or communication stream between the hosts. This process ensures that each host is aware of and prepared for the communication. A complete TCP conversation requires the establishment of a session between the hosts in both directions.
TCP Session Acknowledgments
After a session has been established, the destination sends acknowledgements to the source for the segments that it receives. These acknowledgements form the basis of reliability within the TCP session. As the source receives an acknowledgement, it knows that the data has been successfully delivered and can quit tracking that data. If the source does not receive an acknowledgement within a predetermined amount of time, it retransmits that data to the destination.
Open Ports
When an active server application is assigned to a specific port, that port is considered to be "open" on the server. This means that the Transport layer accepts and processes segments addressed to that port. Any incoming client request addressed to the correct socket is accepted and the data is passed to the server application. There can be many simultaneous ports open on a server, one for each active server application.
The three-way handshake
The three-way handshake:

1 - Establishes that the destination device is present on the network

2 - Verifies that the destination device has an active service and is accepting requests on the destination port number that the initiating client intends to use for the session

3 - Informs the destination device that the source client intends to establish a communication session on that port number
The three steps in TCP connection establishment
The three steps in TCP connection establishment are:

1. The initiating client sends a segment containing an initial sequence value, which serves as a request to the server to begin a communications session.

2. The server responds with a segment containing an acknowledgement value equal to the received sequence value plus 1, plus its own synchronizing sequence value. The acknowledgement is one greater than the sequence number because the ACK always indicates the next expected byte (octet). This acknowledgement value enables the client to tie the response back to the original segment that it sent to the server.

3. The initiating client responds with an acknowledgement value equal to the sequence value it received plus one. This completes the process of establishing the connection.
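The sequence/acknowledgement arithmetic of the three steps can be sketched directly. The initial sequence numbers here are hypothetical; real stacks choose them randomly.

```python
# Sketch: the three-way handshake arithmetic described above.
# Initial sequence numbers (ISNs) are hypothetical.

client_isn = 100   # step 1: client SYN carries its initial sequence number
server_isn = 300   # step 2: server SYN-ACK carries its own ISN ...

server_ack = client_isn + 1   # ... and acknowledges the client's ISN + 1
client_ack = server_isn + 1   # step 3: client ACK is the server's ISN + 1

print(server_ack, client_ack)  # 101 301
```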
TCP Segment Headers (flags)
Within the TCP segment header, there are six 1-bit fields that contain control information used to manage the TCP processes. Those fields are:
• URG - Urgent pointer field significant
• ACK - Acknowledgement field significant
• PSH - Push function
• RST - Reset the connection
• SYN - Synchronize sequence numbers
• FIN - No more data from sender

These fields are referred to as flags because each field is only 1 bit and, therefore, has only two values: 1 or 0. When a bit is set to 1, it indicates that the corresponding control information is contained in the segment.
Steps in TCP Session Termination
1. When the client has no more data to send in the stream, it sends a segment with the FIN flag set.
2. The server sends an ACK to acknowledge the receipt of the FIN to terminate the session from client to server.
3. The server sends a FIN to the client, to terminate the server to client session.
4. The client responds with an ACK to acknowledge the FIN from the server.
initial sequence number (ISN)
During TCP session setup, an initial sequence number (ISN) is set. This initial sequence number represents the starting value for the bytes for this session that will be transmitted to the receiving application.
TCP segment Acknowledgement
The segment header sequence number and acknowledgement number are used together to confirm receipt of the bytes of data contained in the segments. The sequence number is the relative number of bytes that have been transmitted in this session plus 1 (which is the number of the first data byte in the current segment).

TCP uses the acknowledgement number in segments sent back to the source to indicate the next byte in this session that the receiver expects to receive.

This is called expectational acknowledgement.
Expectational Acknowledgement
TCP uses the acknowledgement number in segments sent back to the source to indicate the next byte in this session that the receiver expects to receive.
TCP Window Size
To reduce the overhead of acknowledgements, multiple segments of data can be sent and acknowledged with a single TCP message in the opposite direction. This acknowledgement contains an acknowledgement number based on the total number of bytes received in the session.

The amount of data that a source can transmit before an acknowledgement must be received is called the window size. Window Size is a field in the TCP header that enables the management of lost data and flow control.
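Expectational acknowledgement within a window can be sketched with simple arithmetic. The sequence numbers, segment sizes, and window value below are hypothetical.

```python
# Sketch: expectational acknowledgement over a window of segments,
# per the description above. Numbers are hypothetical.

def ack_for(seq, length):
    """The ACK number is the next byte the receiver expects."""
    return seq + length

# With a 3000-byte window, the source can send three 1000-byte segments
# (starting at sequence numbers 1, 1001, 2001) before one ACK is required.
segments = [(1, 1000), (1001, 1000), (2001, 1000)]
last_seq, last_len = segments[-1]
cumulative_ack = ack_for(last_seq, last_len)   # one ACK covers all three
print(cumulative_ack)  # 3001
```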
TCP Segment Retransmission
When TCP at the source host has not received an acknowledgement after a predetermined amount of time, it will go back to the last acknowledgement number that it received and retransmit data from that point forward.

For a typical TCP implementation, a host may transmit a segment, put a copy of the segment in a retransmission queue, and start a timer. When the data acknowledgment is received, the segment is deleted from the queue. If the acknowledgment is not received before the timer expires, the segment is retransmitted.
Selective Acknowledgements
Hosts today may also employ an optional feature called Selective Acknowledgements. If both hosts support Selective Acknowledgements, it is possible for the destination to acknowledge bytes in discontinuous segments and the host would only need to retransmit the missing data.

Used with TCP for data validation and error correction.
Flow Control
TCP also provides mechanisms for flow control. Flow control assists the reliability of TCP transmission by adjusting the effective rate of data flow between the two services in the session. When the source is informed that the specified amount of data in the segments is received, it can continue sending more data for this session.

If segments are not reaching their destination, the window size might be reduced.
Dynamic Window Sizes
A way to control the data flow is to use dynamic window sizes. When network resources are constrained, TCP can reduce the window size to require that received segments be acknowledged more frequently. This effectively slows down the rate of transmission because the source waits for data to be acknowledged more frequently.

If the destination needs to slow down the rate of communication because of limited buffer memory, it can send a smaller window size value to the source as part of an acknowledgement.
User Datagram Paradigm
UDP is a simple protocol that provides the basic Transport layer functions. It has a much lower overhead than TCP, since it is not connection-oriented and does not provide the sophisticated retransmission, sequencing, and flow control mechanisms.

This does not mean that applications that use UDP are always unreliable. It simply means that these functions are not provided by the Transport layer protocol and must be implemented elsewhere if required.
Connectionless or Transaction-Based
Because UDP is connectionless, sessions are not established before communication takes place as they are with TCP. UDP is said to be transaction-based. In other words, when an application has data to send, it simply sends the data.
Randomized source port numbers
Randomized source port numbers also help with security. If there is a predictable pattern for destination port selection, an intruder can more easily simulate access to a client by attempting to connect to the port number most likely to be open.
Source and Destination Port Addresses
Once a client has chosen the source and destination ports, the same pair of ports is used in the header of all datagrams used in the transaction. For the data returning to the client from the server, the source and destination port numbers in the datagram header are reversed.
Network PDU Packet
During the encapsulation process, Layer 3 receives the Layer 4 PDU and adds a Layer 3 header, or label, to create the Layer 3 PDU. When referring to the Network layer, we call this PDU a packet. When a packet is created, the header must contain, among other information, the address of the host to which it is being sent. This address is referred to as the destination address. The Layer 3 header also contains the address of the originating host. This address is called the source address.
Network Layer Protocols
Protocols implemented at the Network layer that carry user data include:
• Internet Protocol version 4 (IPv4)
• Internet Protocol version 6 (IPv6)
• Novell Internetwork Packet Exchange (IPX)
• AppleTalk
• Connectionless Network Service (CLNS/DECNet)
IP version 6 (IPv6)
IP version 6 (IPv6) has been developed and is being implemented in some areas. IPv6 will operate alongside IPv4 and may replace it in the future. The services provided by IP, as well as the packet header structure and contents, are specified by either the IPv4 protocol or the IPv6 protocol. These services and packet structure are used to encapsulate UDP datagrams or TCP segments for their trip across an internetwork.
IPv4 basic characteristics
IPv4 basic characteristics:
• Connectionless - No connection is established before sending data packets.
• Best Effort (unreliable) - No overhead is used to guarantee packet delivery.
• Media Independent - Operates independently of the medium carrying the data.
Best Effort Service (unreliable)
The IP protocol does not burden the IP service with providing reliability. Compared to a reliable protocol, the IP header is smaller. Transporting these smaller headers requires less overhead. Less overhead means less delay in delivery. This characteristic is desirable for a Layer 3 protocol.
The mission of Layer 3
The mission of Layer 3 is to transport the packets between the hosts while placing as little burden on the network as possible. Layer 3 is not concerned with or even aware of the type of communication contained inside of a packet. This responsibility is the role of the upper layers as required. The upper layers can decide if the communication between services needs reliability and if this communication can tolerate the overhead reliability requires.
Media Independent
The Network layer is also not burdened with the characteristics of the media on which packets will be transported. IPv4 and IPv6 operate independently of the media that carry the data at lower layers of the protocol stack. Any individual IP packet can be communicated electrically over cable, as optical signals over fiber, or wirelessly as radio signals.
Maximum Transmission Unit (MTU)
The maximum size of PDU that each medium can transport.

Part of the control communication between the Data Link layer and the Network layer is the establishment of a maximum size for the packet. The Data Link layer passes the MTU upward to the Network layer. The Network layer then determines how large to create the packets.
Packet Fragmentation
In some cases, an intermediary device - usually a router - will need to split up a packet when forwarding it from one media to a media with a smaller MTU. This process is called fragmenting the packet or fragmentation.
Packet Encapsulation
IPv4 encapsulates, or packages, the Transport layer segment or datagram so that the network can deliver it to the destination host. The IPv4 encapsulation remains in place from the time the packet leaves the Network layer of the originating host until it arrives at the Network layer of the destination host.
IPv4 Header Fields
6 key fields:

• IP Source Address

• IP Destination Address

• Time-to-Live (TTL)

• Type-of-Service (ToS)

• Protocol

• Fragment Offset
Time-to-Live
The Time-to-Live (TTL) is an 8-bit binary value that indicates the remaining "life" of the packet. The TTL value is decreased by at least one each time the packet is processed by a router (that is, each hop).

When the value becomes zero, the router discards or drops the packet and it is removed from the network data flow. This mechanism prevents packets that cannot reach their destination from being forwarded indefinitely between routers in a routing loop. If routing loops were permitted to continue, the network would become congested with data packets that will never reach their destination. Decrementing the TTL value at each hop ensures that it eventually becomes zero and that the packet with the expired TTL field will be dropped.
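The per-hop TTL behavior can be sketched in a few lines of Python (`process_hop` is an illustrative name, not a real router API):

```python
# Minimal sketch of TTL handling at each router hop.
def process_hop(ttl):
    """Decrement the TTL; return None when the packet must be dropped."""
    ttl -= 1               # each router decrements TTL by at least one
    if ttl <= 0:
        return None        # expired: the router discards the packet
    return ttl
```

A packet sent with TTL 2 survives one hop and is discarded by the second router, which is what keeps a routing loop from circulating packets forever.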
Type-of-Service
IPv4 Header Field
The Type-of-Service field contains an 8-bit binary value that is used to determine the priority of each packet. This value enables a Quality-of-Service (QoS) mechanism to be applied to high priority packets, such as those carrying telephony voice data. The router processing the packets can be configured to decide which packet it is to forward first based on the Type-of-Service value.
Fragment Offset
IPv4 Header Field
As mentioned earlier, a router may have to fragment a packet when forwarding it from one medium to another medium that has a smaller MTU. When fragmentation occurs, the IPv4 packet uses the Fragment Offset field and the MF flag in the IP header to reconstruct the packet when it arrives at the destination host. The fragment offset field identifies the order in which to place the packet fragment in the reconstruction.
More Fragments flag
IPv4 Header Field
The More Fragments (MF) flag is a single bit in the Flag field used with the Fragment Offset for the fragmentation and reconstruction of packets. If the More Fragments flag bit is set, it means that the fragment is not the last fragment of the packet.
Don't Fragment flag
IPv4 Header Field
The Don't Fragment (DF) flag is a single bit in the Flag field that indicates that fragmentation of the packet is not allowed. If the Don't Fragment flag bit is set, then fragmentation of this packet is NOT permitted. If a router needs to fragment a packet to allow it to be passed downward to the Data Link layer but the DF bit is set to 1, then the router will discard this packet.
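Fragmentation can be sketched as splitting the payload into MTU-sized pieces, with the offset recorded in 8-byte units (per RFC 791) and MF set on every fragment but the last. This simplified Python sketch ignores header copying and other flag handling; the function name and the 20-byte header assumption are illustrative:

```python
def fragment(payload_len, mtu, header_len=20):
    """Return (offset_in_8_byte_units, data_length, mf_flag) for each fragment."""
    # Fragment data must be a multiple of 8 bytes (except the last fragment).
    max_data = (mtu - header_len) // 8 * 8
    frags = []
    offset = 0
    while offset < payload_len:
        length = min(max_data, payload_len - offset)
        mf = (offset + length) < payload_len   # MF set unless this is the last piece
        frags.append((offset // 8, length, mf))
        offset += length
    return frags
```

For a 4000-byte payload leaving on a 1500-byte MTU link, this yields three fragments of 1480, 1480, and 1040 data bytes.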
Purpose of Segmenting a network
As networks grow larger they present problems that can be at least partially alleviated by dividing the network into smaller interconnected networks.

Common issues with large networks are:
• Performance degradation
• Security issues
• Address Management
Broadcast
A broadcast is a message sent from one host to all other hosts on the network. Typically, a host initiates a broadcast when information about another unknown host is required. Broadcasts are a necessary and useful tool used by protocols to enable data communication on networks.
Broadcast Domain
Broadcasts are contained within a network. In this context, a network is also known as a broadcast domain. Managing the size of broadcast domains by dividing a network into subnets ensures that network and host performances are not degraded to unacceptable levels.
Gateway Address
The hosts only need to know the address of an intermediary device, to which they send packets for all other destinations addresses. This intermediary device is called a gateway. The gateway is a router on a network that serves as an exit from that network.
Hierarchical Addressing
To be able to divide networks, we need hierarchical addressing. A hierarchical address uniquely identifies each host. It also has levels that assist in forwarding packets across internetworks, which enables a network to be divided based on those levels.
IPv4 Address
The logical 32-bit IPv4 address is hierarchical and is made up of two parts:
• the first part identifies the network
• the second part identifies a host on that network.
• Both parts are required for a complete IP address.

For convenience, IPv4 addresses are divided into four groups of eight bits (octets). Each octet is converted to its decimal value, and the complete address is written as the four decimal values separated by a dot (period).
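The octet-by-octet conversion can be shown as a minimal Python sketch (the function name is illustrative):

```python
def to_dotted_decimal(addr32):
    """Convert a 32-bit integer IPv4 address to dotted-decimal notation."""
    # Take each byte from most significant to least significant.
    octets = [(addr32 >> shift) & 0xFF for shift in (24, 16, 8, 0)]
    return ".".join(str(o) for o in octets)
```

For example, the 32-bit value 0xC0A80101 becomes "192.168.1.1".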
Prefix Length
The number of bits of an address used as the network portion is called the prefix length
Next-Hop Address
The router also needs a route that defines where to forward the packet next. This is called the next-hop address. If a route is available to the router, the router will forward the packet to the next-hop router that offers a path to the destination network.
Routing (definition)
A router makes a forwarding decision for each packet that arrives at the gateway interface. This forwarding process is referred to as routing. To forward a packet to a destination network, the router requires a route to that network. If a route to a destination network does not exist, the packet cannot be forwarded.
Routing Tables
The routing table stores information about connected and remote networks. Connected networks are directly attached to one of the router interfaces. These interfaces are the gateways for the hosts on different local networks.

Routes in a routing table have three main features:
• Destination network
• Next-hop
• Metric
Host Routing Table
A host creates the routes used to forward the packets it originates. These routes are derived from the connected network and the configuration of the default gateway.

Hosts automatically add all connected networks to the routes. These routes for the local networks allow packets to be delivered to hosts that are connected to these networks.
Default Route
A router can be configured to have a default route. A default route is a route that will match all destination networks. In IPv4 networks, the address 0.0.0.0 is used for this purpose.

The default route is used to forward packets for which there is no entry in the routing table for the destination network. Packets with a destination network address that does not match a more specific route in the routing table are forwarded to the next-hop router associated with the default route.
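The "more specific route wins, default route catches the rest" behavior can be sketched with Python's standard `ipaddress` module. Real routers use optimized longest-prefix-match structures, and the next-hop addresses here are made up for illustration:

```python
import ipaddress

routes = {
    ipaddress.ip_network("10.1.1.0/24"): "192.168.0.2",  # specific route
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.0.1",  # default route matches everything
}

def next_hop(dest):
    """Return the next-hop for the most specific matching route."""
    dest = ipaddress.ip_address(dest)
    matches = [net for net in routes if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)   # longest prefix wins
    return routes[best]
```

A destination inside 10.1.1.0/24 takes the specific route; any other destination falls through to the default route.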
Next-Hop Address
A next-hop is the address of the device that will process the packet next. For a host on a network, the address of the default gateway (router interface) is the next-hop for all packets destined for another network.

Some routes can have multiple next-hops. This indicates that there are multiple paths to the same destination network. These are parallel routes that the router can use to forward packets.
Packet Forwarding
Routing is done packet-by-packet and hop-by-hop. Each packet is treated independently in each router along the path. At each hop, the router examines the destination IP address for each packet and then checks the routing table for forwarding information.

The router will do one of three things with the packet:
• Forward it to the next-hop router
• Forward it to the destination host
• Drop it
Static Routing
Routes to remote networks with the associated next hops can be manually configured on the router. This is known as static routing. A default route can also be statically configured.
Dynamic Routing
Routing protocols are the set of rules by which routers dynamically share their routing information. As routers become aware of changes to the networks for which they act as the gateway, or changes to links between routers, this information is passed on to other routers. When a router receives information about new or changed routes, it updates its own routing table and, in turn, passes the information to other routers. In this way, all routers have accurate routing tables that are updated dynamically and can learn about routes to remote networks that are many hops away.
Octet
Binary patterns representing IPv4 addresses are expressed as dotted decimals by separating each byte of the binary pattern, called an octet, with a dot. It is called an octet because each decimal number represents one byte or 8 bits.
Radix
In the binary numbering system, the radix is 2. Therefore, each position represents increasing powers of 2
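The positional, radix-2 scheme can be demonstrated with a short Python sketch (the function name is illustrative):

```python
def binary_to_decimal(bits):
    """Evaluate a binary string: each position is the next higher power of 2."""
    total = 0
    for bit in bits:
        total = total * 2 + int(bit)   # shift left one position, add the new bit
    return total
```

For example, the octet pattern "11000000" evaluates to 128 + 64 = 192.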
Types of IPv4 Addresses
Within the address range of each IPv4 network, we have three types of addresses:
• Network address - The address by which we refer to the network
• Broadcast address - A special address used to send data to all hosts in the network
• Host addresses - The addresses assigned to the end devices in the network
Broadcast Address
The IPv4 broadcast address is a special address for each network that allows communication to all the hosts in that network. To send data to all hosts in a network, a host can send a single packet that is addressed to the broadcast address of the network.

The broadcast address uses the highest address in the network range.
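"Highest address in the network range" means the network address with all host bits set to 1, which this minimal Python sketch computes (the function name is illustrative):

```python
import ipaddress

def broadcast_address(network, prefix_len):
    """Set all host bits of the network address to 1."""
    host_bits = 32 - prefix_len
    net = int(ipaddress.IPv4Address(network))
    return str(ipaddress.IPv4Address(net | ((1 << host_bits) - 1)))
```

For the network 192.168.1.0 /24, the broadcast address is 192.168.1.255.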
IPv4 Network Communication Types
In an IPv4 network, the hosts can communicate one of three different ways:
• Unicast - the process of sending a packet from one host to an individual host
• Broadcast - the process of sending a packet from one host to all hosts in the network
• Multicast - the process of sending a packet from one host to a selected group of hosts
Unicast Traffic
Unicast communication is used for the normal host-to-host communication in both a client/server and a peer-to-peer network.

Unicast packets use the host address of the destination device as the destination address and can be routed through an internetwork. In an IPv4 network, the unicast address applied to an end device is referred to as the host address.
Directed Broadcast
A directed broadcast is sent to all hosts on a specific network. This type of broadcast is useful for sending a broadcast to all hosts on a non-local network. Although routers do not forward directed broadcasts by default, they may be configured to do so.
Limited Broadcast
The limited broadcast is used for communication that is limited to the hosts on the local network. These packets use a destination IPv4 address 255.255.255.255. Routers do not forward this broadcast. Packets addressed to the limited broadcast address will only appear on the local network. For this reason, an IPv4 network is also referred to as a broadcast domain. Routers form the boundary for a broadcast domain.
Experimental Addresses
One major block of addresses reserved for special purposes is the IPv4 experimental address range 240.0.0.0 to 255.255.255.254. Currently, these addresses are listed as reserved for future use (RFC 3330). This suggests that they could be converted to usable addresses. Currently, they cannot be used in IPv4 networks. However, these addresses could be used for research or experimentation.
Multicast Addresses
A major block of addresses reserved for special purposes is the IPv4 multicast address range 224.0.0.0 to 239.255.255.255.

That is, addresses greater than or equal to 224.0.0.0 and less than 240.0.0.0.
Reserved Link Local Addresses
The IPv4 multicast addresses 224.0.0.0 to 224.0.0.255 are reserved link local addresses. These addresses are to be used for multicast groups on a local network.

Packets to these destinations are always transmitted with a time-to-live (TTL) value of 1. Therefore, a router connected to the local network should never forward them.

A typical use of reserved link-local addresses is in routing protocols using multicast transmission to exchange routing information.
Globally Scoped Addresses
The globally scoped addresses are 224.0.1.0 to 238.255.255.255. They may be used to multicast data across the Internet
Network Time Protocol (NTP)
224.0.1.1 has been reserved for Network Time Protocol (NTP) to synchronize the time-of-day clocks of network devices
Private Addresses
Blocks of addresses that are used in networks that require limited or no Internet access.
Private Address Blocks
The private address blocks are:
• 10.0.0.0 to 10.255.255.255 (10.0.0.0 /8)
• 172.16.0.0 to 172.31.255.255 (172.16.0.0 /12)
• 192.168.0.0 to 192.168.255.255 (192.168.0.0 /16)
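A quick membership check against these three blocks can be written with Python's standard `ipaddress` module (the function name is illustrative):

```python
import ipaddress

PRIVATE_BLOCKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private(addr):
    """True if the address falls inside one of the RFC 1918 private blocks."""
    a = ipaddress.ip_address(addr)
    return any(a in block for block in PRIVATE_BLOCKS)
```

Note the /12 block: 172.31.255.255 is private, but 172.32.0.1 is not.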
Network Address Translation (NAT)
By using services that translate private addresses to public addresses, hosts on a privately addressed network can have access to resources across the Internet. These services are implemented on a device at the edge of the private network.

NAT allows the hosts in the network to "borrow" a public address for communicating to outside networks. While there are some limitations and performance issues with NAT, clients for most applications can access services over the Internet without noticeable problems.
Loopback
IPv4 loopback address is 127.0.0.1. The loopback is a special address that hosts use to direct traffic to themselves. The loopback address creates a shortcut method for TCP/IP applications and services that run on the same device to communicate with one another. By using the loopback address instead of the assigned IPv4 host address, two services on the same host can bypass the lower layers of the TCP/IP stack. You can also ping the loopback address to test the configuration of TCP/IP on the local host.

Although only the single 127.0.0.1 address is used, addresses 127.0.0.0 to 127.255.255.255 are reserved. Any address within this block will loop back within the local host. No address within this block should ever appear on any network.
Link-Local Addresses
169.254.0.0 to 169.254.255.255 (169.254.0.0 /16)

IPv4 addresses in the address block 169.254.0.0 to 169.254.255.255 (169.254.0.0 /16) are designated as link-local addresses. These addresses can be automatically assigned to the local host by the operating system in environments where no IP configuration is available. These might be used in a small peer-to-peer network or for a host that could not automatically obtain an address from a Dynamic Host Configuration Protocol (DHCP) server.
Classful Addressing
The unicast address classes A, B, and C defined specifically-sized networks as well as specific address blocks for these networks, as shown in the figure. A company or organization was assigned an entire class A, class B, or class C address block.
Class A Addresses
A class A address block was designed to support extremely large networks with more than 16 million host addresses. Class A IPv4 addresses used a fixed /8 prefix with the first octet to indicate the network address. The remaining three octets were used for host addresses.

1 - 127
Class B Addresses
Class B address space was designed to support the needs of moderate to large size networks with more than 65,000 hosts. A class B IP address used the two high-order octets to indicate the network address. The other two octets specified host addresses. As with class A, address space for the remaining address classes needed to be reserved.

128 - 191
Class C Addresses
192 - 223

The class C address space was the most commonly available of the historic address classes. This address space was intended to provide addresses for small networks with a maximum of 254 hosts.

Class C address blocks used a /24 prefix. This meant that a class C network used only the last octet as host addresses with the three high-order octets used to indicate the network address.
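The historical class of an address was determined entirely by its first octet, which a minimal Python sketch can classify (the function name is illustrative):

```python
def address_class(addr):
    """Classify a historical IPv4 address by the value of its first octet."""
    first = int(addr.split(".")[0])
    if 1 <= first <= 127:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D (multicast)"
    return "E (experimental)"
```

So 10.0.0.1 is class A, 172.16.0.1 is class B, and 192.168.1.1 is class C.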
IPv6 Features
To provide these features, IPv6 offers:
• 128-bit hierarchical addressing - to expand addressing capabilities
• Header format simplification - to improve packet handling
• Improved support for extensions and options - for increased scalability/longevity and improved packet handling
• Flow labeling capability - as QoS mechanisms
• Authentication and privacy capabilities - to integrate security
IPv6 Address Size
128 Bits or 16 Octets. An IPv6 address is represented as eight groups of four hexadecimal digits, each group representing 16 bits (two octets). The groups are separated by colons (:)
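Python's standard `ipaddress` module can show both the full eight-group form and the common shortened form (leading zeros dropped and one run of zero groups replaced by `::`); the example address is from the 2001:db8::/32 documentation range:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
groups = addr.exploded.split(":")   # eight groups of four hex digits (16 bits each)
short = addr.compressed             # shortened notation
```

Here `groups` has eight entries and `short` is "2001:db8::1".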
Subnet Mask Patterns
These patterns are:
• 00000000 = 0
• 10000000 = 128
• 11000000 = 192
• 11100000 = 224
• 11110000 = 240
• 11111000 = 248
• 11111100 = 252
• 11111110 = 254
• 11111111 = 255
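These nine patterns are simply the octet values produced as 1-bits fill in from the high-order end, as this minimal Python sketch shows (the function name is illustrative):

```python
def mask_octet(bits_set):
    """Octet value when the top `bits_set` bits are 1 and the rest are 0."""
    return (0xFF << (8 - bits_set)) & 0xFF
```

Running it for 0 through 8 set bits reproduces the list above: 0, 128, 192, 224, 240, 248, 252, 254, 255.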
ANDing
The IPv4 host address is logically ANDed with its subnet mask to determine the network address to which the host is associated. When this ANDing between the address and the subnet mask is performed, the result yields the network address.
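The ANDing operation works octet by octet, as in this minimal Python sketch (the function name is illustrative):

```python
def network_address(host, mask):
    """Bitwise-AND each octet of the host address with the subnet mask."""
    host_octets = [int(o) for o in host.split(".")]
    mask_octets = [int(o) for o in mask.split(".")]
    return ".".join(str(h & m) for h, m in zip(host_octets, mask_octets))
```

For example, host 192.168.1.130 with mask 255.255.255.192 yields the network address 192.168.1.128 (130 AND 192 = 128).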
Variable Length Subnet Mask (VLSM)
The capability to specify a different subnet mask for the same Class A, B, or C network number on different subnets. VLSM can help optimize available address space.
Ping
Ping is a utility for testing IP connectivity between hosts. Ping sends out requests for responses from a specified host address. Ping uses a Layer 3 protocol that is a part of the TCP/IP suite called Internet Control Message Protocol (ICMP). Ping uses an ICMP Echo Request datagram.
ICMP Echo Request datagram
The ICMP Echo Request datagram is the datagram used for pinging. When a host receives one, it responds with an ICMP Echo Reply datagram.
Traceroute (tracert)
Traceroute (tracert) is a utility that allows us to observe the path between two hosts. The trace generates a list of hops that were successfully reached along the path.

This list can provide us with important verification and troubleshooting information. If the data reaches the destination, then the trace lists the interface on every router in the path.
Purpose of the Data Link layer
The Data Link layer performs two basic services:

• Allows the upper layers to access the media using techniques such as framing

• Controls how data is placed onto the media and is received from the media using techniques such as media access control and error detection
Media Access Control Method
The technique used for getting the frame on and off media is called the media access control method. For the data to be transferred across a number of different media, different media access control methods may be required during the course of a single communication
Node
A node that is an end device uses an adapter to make the connection to the network. For example, to connect to a LAN, the device would use the appropriate Network Interface Card (NIC) to connect to the LAN media. The adapter manages the framing and media access control.
Data-Link Frame
The description of a frame is a key element of each Data Link layer protocol. Data Link layer protocols require control information to enable the protocols to function. Control information may tell:
• Which nodes are in communication with each other
• When communication between individual nodes begins and when it ends
• Which errors occurred while the nodes communicated
• Which nodes will communicate next
Frame Field Types
Typical field types include:
• Start and stop indicator fields - The beginning and end limits of the frame
• Naming or addressing fields
• Type field - The type of PDU contained in the frame
• Control - Flow control services
• A data field -The frame payload (Network layer packet)
• Error Control
Logical Link Control
Logical Link Control (LLC) places information in the frame that identifies which Network layer protocol is being used for the frame. This information allows multiple Layer 3 protocols, such as IP and IPX, to utilize the same network interface and media.
Media Access Control
Media Access Control (MAC) provides Data Link layer addressing and delimiting of data according to the physical signaling requirements of the medium and the type of Data Link layer protocol in use.
Basic Media Access Types
There are two basic media access control methods for shared media:

Controlled - Each node has its own time to use the medium

Contention-based - All nodes compete for the use of the medium
Controlled Access Method
When using the controlled access method, network devices take turns, in sequence, to access the medium. This method is also known as scheduled access or deterministic. If a device does not need to access the medium, the opportunity to use the medium passes to the next device in line. When one device places a frame on the media, no other device can do so until the frame has arrived at the destination and has been processed by the destination.
Scheduled Access or Deterministic
Controlled Media Access Methods
Contention-Based
Contention-based methods allow any device to try to access the medium whenever it has data to send. These methods are also known as non-deterministic.
Non-Deterministic
Refers to contention based media access methods where any device can attempt to access the medium when it has data to send
Carrier Sense Multiple Access (CSMA)
A Carrier Sense Multiple Access (CSMA) process is used to first detect if the media is carrying a signal. If a carrier signal on the media from another node is detected, it means that another device is transmitting. When the device attempting to transmit sees that the media is busy, it will wait and try again after a short time period. If no carrier signal is detected, the device transmits its data. Ethernet and wireless networks use contention-based media access control.
CSMA/Collision Detection (CSMA/CD)
In CSMA/Collision Detection (CSMA/CD), the device monitors the media for the presence of a data signal. If a data signal is absent, indicating that the media is free, the device transmits the data. If signals are then detected that show another device was transmitting at the same time, all devices stop sending and try again later. Traditional forms of Ethernet use this method.
CSMA/Collision Avoidance (CSMA/CA)
In CSMA/Collision Avoidance (CSMA/CA), the device examines the media for the presence of a data signal. If the media is free, the device sends a notification across the media of its intent to use it. The device then sends the data. This method is used by 802.11 wireless networking technologies.
point-to-point topologies
In point-to-point topologies, the media interconnects just two nodes. In this arrangement, the nodes do not have to share the media with other hosts or determine if a frame is destined for that node. Therefore, Data Link layer protocols have little to do for controlling non-shared media access.
Full Duplex
In full-duplex communication, both devices can transmit and receive on the media at the same time. The Data Link layer assumes that the media is available for transmission for both nodes at any time. Therefore, there is no media arbitration necessary in the Data Link layer.
Half Duplex
Half-duplex communication means that the devices can both transmit and receive on the media but cannot do so simultaneously. Ethernet has established arbitration rules for resolving conflicts arising from instances when more than one station attempts to transmit at the same time.
Physical Topology
The physical topology is an arrangement of the nodes and the physical connections between them. The representation of how the media is used to interconnect the devices is the physical topology.
Logical Topology
A logical topology is the way a network transfers frames from one node to the next. This arrangement consists of virtual connections between the nodes of a network independent of their physical layout. These logical signal paths are defined by Data Link layer protocols. The Data Link layer "sees" the logical topology of a network when controlling data access to the media. It is the logical topology that influences the type of network framing and media access control used.
Virtual Circuit
A virtual circuit is a logical connection created within a network between two network devices. The two nodes on either end of the virtual circuit exchange the frames with each other. This occurs even if the frames are directed through intermediary devices. Virtual circuits are important logical communication constructs used by some Layer 2 technologies.
Multi-Access Topology
A logical multi-access topology enables a number of nodes to communicate by using the same shared media. Data from only one node can be placed on the medium at any one time. Every node sees all the frames that are on the medium, but only the node to which the frame is addressed processes the contents of the frame.
Ring Topology
In a logical ring topology, each node in turn receives a frame. If the frame is not addressed to the node, the node passes the frame to the next node. This allows a ring to use a controlled media access control technique called token passing.

Nodes in a logical ring topology remove the frame from the ring, examine the address, and send it on if it is not addressed to that node. In a ring, all nodes between the source and destination node examine the frame.
Token Passing
Used in Ring Topologies to designate when a host may access the media. The token is passed from host to host.
Frame Addressing
The Data Link layer provides addressing that is used in transporting the frame across the shared local media. Device addresses at this layer are referred to as physical addresses. Data Link layer addressing is contained within the frame header and specifies the frame destination node on the local network. The frame header may also contain the source address of the frame.
Frame Trailers
Data Link layer protocols add a trailer to the end of each frame. The trailer is used to determine if the frame arrived without error. This process is called error detection. Note that this is different from error correction.

Error detection is accomplished by placing a logical or mathematical summary of the bits that comprise the frame in the trailer.
Frame Check Sequence (FCS)
The Frame Check Sequence (FCS) field is used to determine if errors occurred in the transmission and reception of the frame. The error detection mechanism provided by the use of the FCS field discovers most errors caused on the media.
Cyclic Redundancy Check (CRC)
When the frame arrives at the destination node, the receiving node calculates its own logical summary, or CRC, of the frame. The receiving node compares the two CRC values. If the two values are the same, the frame is considered to have arrived as transmitted. If the CRC value in the FCS differs from the CRC calculated at the receiving node, the frame is discarded.
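The trailer check can be sketched in Python using the standard library's CRC-32 as a stand-in for the FCS computation (real Data Link protocols define their own CRC parameters; the function names are illustrative):

```python
import zlib

def make_frame(payload):
    """Append a 4-byte CRC-32 trailer (the FCS) to the payload."""
    fcs = zlib.crc32(payload)
    return payload + fcs.to_bytes(4, "big")

def check_frame(frame):
    """Recompute the CRC at the receiver and compare it with the received FCS."""
    payload, fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
    return zlib.crc32(payload) == fcs
```

An undamaged frame passes the check; flipping even one bit of the payload makes the recomputed CRC differ, so the frame would be discarded.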
Point-to-Point Protocol (PPP)
Point-to-Point Protocol (PPP) is a protocol used to deliver frames between two nodes. Unlike many Data Link layer protocols that are defined by electrical engineering organizations, the PPP standard is defined by RFCs. PPP was developed as a WAN protocol and remains the protocol of choice to implement many serial WANs.
Standard IEEE 802.11
The Standard IEEE 802.11, commonly referred to as Wi-Fi, is a contention-based system using a Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) media access process.
Role of the Physical Layer
The role of the OSI Physical layer is to encode the binary digits that represent Data Link layer frames into signals and to transmit and receive these signals across the physical media - copper wires, optical fiber, and wireless - that connect network devices.
Encoding
Encoding is a method of converting a stream of data bits into a predefined code. Codes are groupings of bits used to provide a predictable pattern that can be recognized by both the sender and the receiver. Using predictable patterns helps to distinguish data bits from control bits and provide better media error detection.

In addition to creating codes for data, encoding methods at the Physical layer may also provide codes for control purposes such as identifying the beginning and end of a frame. The transmitting host will transmit the specific pattern of bits or a code to identify the beginning and end of the frame.
Signaling

(Signalling Method)
The Physical layer must generate the electrical, optical, or wireless signals that represent the "1" and "0" on the media. The method of representing the bits is called the signaling method. The Physical layer standards must define what type of signal represents a "1" and a "0". This can be as simple as a change in the level of an electrical signal or optical pulse or a more complex signaling method.
Bit Time
Each signal placed onto the media has a specific amount of time to occupy the media. This is referred to as its bit time. Signals are processed by the receiving device and returned to their representation as bits.
Synchronization
Successful delivery of the bits requires some method of synchronization between transmitter and receiver. The signals representing the bits must be examined at specific times during the bit time to properly determine if the signal represents a "1" or a "0". The synchronization is accomplished by the use of a clock. In LANs, each end of the transmission maintains its own clock. Many signaling methods use predictable transitions in the signal to provide synchronization between the clocks of the transmitting and the receiving devices.
Signaling Methods
Bits are represented on the medium by changing one or more of the following characteristics of a signal:
• Amplitude
• Frequency
• Phase
Non-Return to Zero (NRZ)
With Non-Return to Zero (NRZ), a 0 may be represented by one voltage level on the media during the bit time and a 1 might be represented by a different voltage on the media during the bit time.
Manchester Encoding
There are also methods of signaling that use transitions, or the absence of transitions, to indicate a logic level. Manchester Encoding indicates a 0 by a high to low voltage transition in the middle of the bit time. For a 1 there is a low to high voltage transition in the middle of the bit time.
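The mid-bit transitions described above can be modeled with a short Python sketch that emits a pair of voltage levels per bit (the function name and the "high"/"low" labels are illustrative):

```python
def manchester_encode(bits):
    """Manchester: 0 -> high-to-low, 1 -> low-to-high transition mid bit time."""
    levels = []
    for b in bits:
        levels += ["high", "low"] if b == "0" else ["low", "high"]
    return levels
```

Encoding "01" produces high, low (the 0) followed by low, high (the 1); the guaranteed transition in every bit time is what lets the receiver recover clocking from the signal.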
Encoding
Encoding represents the symbolic grouping of bits prior to being presented to the media. By using an encoding step before the signals are placed on the media, we improve the efficiency at higher speed data transmission. By using the coding groups, we can detect errors more efficiently.
Signal Patterns
One way to provide frame detection is to begin each frame with a pattern of signals representing bits that the Physical layer recognizes as denoting the start of a frame. Another pattern of bits will signal the end of the frame. Signal bits not framed in this manner are ignored by the Physical layer standard being used

Signal patterns can indicate: start of frame, end of frame, and frame contents. These signal patterns can be decoded into bits. The bits are interpreted as codes. The codes indicate where the frames start and stop.
Code Groups
Encoding techniques use bit patterns called symbols. The Physical layer may use a set of encoded symbols - called code groups - to represent encoded data or control information.

A code group is a consecutive sequence of code bits that are interpreted and mapped as data bit patterns. Code groups are often used as an intermediary encoding technique for higher speed LAN technologies.
Advantages using code groups
Advantages using code groups include:

• Reducing bit level error

• Limiting the effective energy transmitted into the media

• Helping to distinguish data bits from control bits

• Better media error detection
DC balancing
Limiting Energy Transmitted - In many code groups, the symbols ensure that the number of 1s and 0s in a string of symbols are evenly balanced. The process of balancing the number of 1s and 0s transmitted is called DC balancing. This prevents excessive amounts of energy from being injected into the media during transmission, thereby reducing the interference radiated from the media.
Types of symbols in Code Groups
The code groups have three types of symbols:

• Data symbols - Symbols that represent the data of the frame as it is passed down to the Physical layer.

• Control symbols - Special codes injected by the Physical layer used to control transmission. These include end-of-frame and idle media symbols.

• Invalid symbols - Symbols that have patterns not allowed on the media. The receipt of an invalid symbol indicates a frame error.
Data symbols
Data symbols - Symbols that represent the data of the frame as it is passed down to the Physical layer.
Control symbols
Control symbols - Special codes injected by the Physical layer used to control transmission. These include end-of-frame and idle media symbols.
Invalid symbols
Invalid symbols - Symbols that have patterns not allowed on the media. The receipt of an invalid symbol indicates a frame error.
4B/5B
In this technique, 4 bits of data are turned into 5-bit code symbols for transmission over the media system. In 4B/5B, each byte to be transmitted is broken into four-bit pieces or nibbles and encoded as five-bit values known as symbols. These symbols represent the data to be transmitted as well as a set of codes that help control transmission on the media.
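The nibble-to-symbol mapping can be illustrated with the sixteen standard 4B/5B data symbols (the values below are the data codes used by 100BASE-TX and FDDI):

```python
# The sixteen 4B/5B data symbols. Each 4-bit nibble maps to a 5-bit
# symbol chosen so that no long runs of 0s occur, guaranteeing enough
# signal transitions for clock synchronization at the receiver.
FOUR_B_FIVE_B = {
    "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
    "0100": "01010", "0101": "01011", "0110": "01110", "0111": "01111",
    "1000": "10010", "1001": "10011", "1010": "10110", "1011": "10111",
    "1100": "11010", "1101": "11011", "1110": "11100", "1111": "11101",
}

def encode_byte(byte):
    """Split a byte into two nibbles and encode each as a 5-bit symbol."""
    bits = format(byte, "08b")
    return FOUR_B_FIVE_B[bits[:4]] + FOUR_B_FIVE_B[bits[4:]]

print(encode_byte(0x3E))  # nibbles 0011 and 1110 -> '10101' + '11100'
```

Note the cost of the scheme: every 8 bits of data become 10 bits on the wire, which is why 100BASE-TX signals at 125 Mbaud to carry 100 Mbps of data.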
Bandwidth
The capacity of a medium to carry data is described as the raw data bandwidth of the media. Digital bandwidth measures the amount of information that can flow from one place to another in a given amount of time. Bandwidth is typically measured in kilobits per second (kbps) or megabits per second (Mbps).

The practical bandwidth of a network is determined by a combination of factors: the properties of the physical media and the technologies chosen for signaling and detecting network signals.
Throughput
Throughput is the measure of the transfer of bits across the media over a given period of time. Due to a number of factors, throughput usually does not match the specified bandwidth in Physical layer implementations such as Ethernet. Among these factors are the amount of traffic, the type of traffic, and the number of network devices encountered on the network being measured.
Goodput
Goodput is the measure of the transfer of usable data over a given period of time, and is therefore the measure of most interest to network users. Goodput is throughput minus the traffic overhead for establishing sessions, acknowledgments, and encapsulation.
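The relationship between the three measures can be shown with hypothetical numbers (the figures below are assumptions chosen for illustration only):

```python
# Hypothetical example distinguishing bandwidth, throughput, and goodput:
# a 100 Mbps link whose measured throughput is 60 Mbps, with 10 Mbps of
# protocol overhead (headers, acknowledgments, session setup).
bandwidth_mbps = 100     # raw capacity of the medium
throughput_mbps = 60     # bits actually transferred per second
overhead_mbps = 10       # encapsulation, ACKs, session traffic

goodput_mbps = throughput_mbps - overhead_mbps
print(goodput_mbps)      # 50 Mbps of usable application data
```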
Noise
The timing and voltage values of these signals are susceptible to interference or "noise" from outside the communications system. These unwanted signals can distort and corrupt the data signals being carried by copper media. Radio waves and electromagnetic devices such as fluorescent lights, electric motors, and other devices are potential sources of noise.
Unshielded twisted-pair (UTP)
Unshielded twisted-pair (UTP) cabling, as it is used in Ethernet LANs, consists of four pairs of color-coded wires that have been twisted together and then encased in a flexible plastic sheath. As seen in the figure, the color codes identify the individual pairs and wires in the pairs and aid in cable termination.

The twisting has the effect of canceling unwanted signals. When two wires in an electrical circuit are placed close together, external electromagnetic fields create the same interference in each wire.
Crosstalk
Crosstalk is the interference caused by the magnetic field around the adjacent pairs of wires in the cable. When electrical current flows through a wire, it creates a circular magnetic field around the wire. With the current flowing in opposite directions in the two wires in a pair, the magnetic fields - as equal but opposite forces - have a cancellation effect on each other.
Construction of Coaxial cables
Coaxial cable consists of a copper conductor surrounded by a layer of flexible insulation. Over this insulating material is a woven copper braid, or metallic foil, that acts as the second wire in the circuit and as a shield for the inner conductor.

This second layer, or shield, also reduces the amount of outside electromagnetic interference. Covering the shield is the cable jacket. All the elements of the coaxial cable encircle the center conductor. Because they all share the same axis, this construction is called coaxial, or coax for short.
Shielded Twisted-Pair (STP) Cable
Another type of cabling used in networking is shielded twisted-pair (STP). STP uses four pairs of wires that are wrapped in an overall metallic braid or foil. STP cable shields the entire bundle of wires within the cable as well as the individual wire pairs. STP provides better noise protection than UTP cabling, however at a significantly higher price.
Fiber Media
Fiber-optic cabling uses either glass or plastic fibers to guide light impulses from source to destination. The bits are encoded on the fiber as light impulses. Optical fiber cabling is capable of very large raw data bandwidth rates. Most current transmission standards have yet to approach the potential bandwidth of this media.
Optical Cable Construction
Optical fiber cables consist of a PVC jacket and a series of strengthening materials that surround the optical fiber and its cladding. The cladding surrounds the actual glass or plastic fiber and is designed to prevent light loss from the fiber.

Because light can travel in only one direction over an optical fiber, two fibers are required to support full-duplex operation. Fiber-optic patch cables bundle together two optical fiber cables and terminate them with a pair of standard single-fiber connectors. Some fiber connectors accept both the transmitting and receiving fibers in a single connector.
Single-mode optical fiber
Single-mode optical fiber carries a single ray of light, usually emitted from a laser. Because the laser light is uni-directional and travels down the center of the fiber, this type of fiber can transmit optical pulses for very long distances.
Multimode fiber
Multimode fiber typically uses LED emitters that do not create a single coherent light wave. Instead, light from an LED enters the multimode fiber at different angles.

Because light entering the fiber at different angles takes different amounts of time to travel down the fiber, long fiber runs may result in the pulses becoming blurred on reception at the receiving end. This effect, known as modal dispersion, limits the length of multimode fiber segments.
Modal Dispersion
With multimode fiber, the signals are generated by LEDs and enter the fiber at different angles. Over long distances, these signals can become blurred. This limits the length of multimode fiber segments.
Fiber-optic connector types
Fiber-optic connectors come in a variety of types. The figure shows some of the most common:

Straight-Tip (ST) (trademarked by AT&T) - a very common bayonet style connector widely used with multimode fiber.

Subscriber Connector (SC) - a connector that uses a push-pull mechanism to ensure positive insertion. This connector type is widely used with single-mode fiber.

Lucent Connector (LC) - A small connector becoming popular for use with single-mode fiber and also supports multi-mode fiber.
Three common types of fiber-optic termination and splicing errors
Three common types of fiber-optic termination and splicing errors are:

• Misalignment - the fiber-optic media are not precisely aligned to one another when joined.

• End gap - the media do not completely touch at the splice or connection.

• End finish - the media ends are not well polished or dirt is present at the termination.
Optical Time Domain Reflectometer (OTDR)
It is recommended that an Optical Time Domain Reflectometer (OTDR) be used to test each fiber-optic cable segment. This device injects a test pulse of light into the cable and measures the backscatter and reflected light detected as a function of time. The OTDR calculates the approximate distance along the cable at which these faults are detected.
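The distance calculation an OTDR performs can be sketched as follows, assuming a typical refractive index of about 1.5 for glass fiber. The reflection from a fault makes a round trip, so the distance is half of speed times time:

```python
# Sketch of the OTDR distance calculation: light travels in glass at
# roughly c / n (refractive index n, assumed here to be 1.5), and the
# reflection makes a round trip, so distance = speed * time / 2.
C = 299_792_458          # speed of light in a vacuum, m/s
N = 1.5                  # assumed refractive index of the fiber

def fault_distance_m(round_trip_seconds):
    speed_in_fiber = C / N
    return speed_in_fiber * round_trip_seconds / 2

# A reflection detected 1 microsecond after the pulse is injected:
print(round(fault_distance_m(1e-6), 1))  # about 99.9 meters
```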
Standard IEEE 802.11
Standard IEEE 802.11 - Commonly referred to as Wi-Fi, is a Wireless LAN (WLAN) technology that uses a contention or non-deterministic system with a Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) media access process.
Standard IEEE 802.15
Standard IEEE 802.15 - Wireless Personal Area Network (WPAN) standard, commonly known as "Bluetooth", uses a device pairing process to communicate over distances from 1 to 100 meters.
Standard IEEE 802.16
Standard IEEE 802.16 - Commonly known as Worldwide Interoperability for Microwave Access (WiMAX), uses a point-to-multipoint topology to provide wireless broadband access.
Global System for Mobile Communications (GSM)
Global System for Mobile Communications (GSM) - Includes Physical layer specifications that enable the implementation of the Layer 2 General Packet Radio Service (GPRS) protocol to provide data transfer over mobile cellular telephony networks.
Wireless Access Point (AP)
Wireless Access Point (AP) - Concentrates the wireless signals from users and connects, usually through a copper cable, to the existing copper-based network infrastructure such as Ethernet.
Wireless NIC adapters
Wireless NIC adapters - Provides wireless communication capability to each network host.
IEEE 802.11a
IEEE 802.11a - Operates in the 5 GHz frequency band and offers speeds of up to 54 Mbps. Because this standard operates at higher frequencies, it has a smaller coverage area and is less effective at penetrating building structures. Devices operating under this standard are not interoperable with the 802.11b and 802.11g standards.
IEEE 802.11b
IEEE 802.11b - Operates in the 2.4 GHz frequency band and offers speeds of up to 11 Mbps. Devices implementing this standard have a longer range and are better able to penetrate building structures than devices based on 802.11a.
IEEE 802.11g
IEEE 802.11g - Operates in the 2.4 GHz frequency band and offers speeds of up to 54 Mbps. Devices implementing this standard therefore operate at the same radio frequency and range as 802.11b but with the bandwidth of 802.11a.
IEEE 802.11n
IEEE 802.11n - The IEEE 802.11n standard is currently in draft form. The proposed standard defines frequency of 2.4 Ghz or 5 GHz. The typical expected data rates are 100 Mbps to 210 Mbps with a distance range of up to 70 meters.
DIX Ethernet
The first LAN in the world was the original version of Ethernet. Robert Metcalfe and his coworkers at Xerox designed it more than thirty years ago. The first Ethernet standard was published in 1980 by a consortium of Digital Equipment Corporation, Intel, and Xerox (DIX).
Ethernet operates at which layers of the OSI Model? TCP/IP?
Ethernet operates across two layers of the OSI model. The model provides a reference to which Ethernet can be related but it is actually implemented in the lower half of the Data Link layer, which is known as the Media Access Control (MAC) sublayer, and the Physical layer only.
2 layers of Ethernet
Ethernet separates the functions of the Data Link layer into two distinct sublayers: the Logical Link Control (LLC) sublayer and the Media Access Control (MAC) sublayer. The functions described in the OSI model for the Data Link layer are assigned to the LLC and MAC sublayers. The use of these sublayers contributes significantly to compatibility between diverse end devices.
Logical Link Control
Logical Link Control handles the communication between the upper layers and the networking software, and the lower layers, typically the hardware. The LLC sublayer takes the network protocol data, which is typically an IPv4 packet, and adds control information to help deliver the packet to the destination node. Layer 2 communicates with the upper layers through LLC. LLC can be considered the driver software for the Network Interface Card (NIC).
Media Access Control (MAC)
Media Access Control (MAC) is the lower Ethernet sublayer of the Data Link layer. Media Access Control is implemented by hardware, typically in the computer Network Interface Card (NIC). The Ethernet MAC sublayer has two primary responsibilities: Data Encapsulation and Media Access Control.
Ethernet Data Encapsulation
Data encapsulation provides three primary functions:
• Frame delimiting
• Addressing
• Error detection
Media Access Control
The MAC sublayer controls the placement of frames on the media and the removal of frames from the media. As its name implies, it manages the media access control. This includes the initiation of frame transmission and recovery from transmission failure due to collisions.
Ethernet Logical Topology

multi-access bus
The underlying logical topology of Ethernet is a multi-access bus. This means that all the nodes (devices) in that network segment share the medium. This further means that all the nodes in that segment receive all the frames transmitted by any node on that segment.

Because all the nodes receive all the frames, each node needs to determine if a frame is to be accepted and processed by that node. This requires examining the addressing in the frame provided by the MAC address.
Alohanet
The foundation for Ethernet technology was first established in 1970 with a program called Alohanet. Alohanet was a digital radio network designed to transmit information over a shared radio frequency between the Hawaiian Islands.
10BASE5, or Thicknet
10BASE5, or Thicknet, used a thick coaxial cable that allowed for cabling distances of up to 500 meters before the signal required a repeater.
10BASE2, or Thinnet
10BASE2, or Thinnet, used a thin coaxial cable that was smaller in diameter and more flexible than Thicknet and allowed for cabling distances of 185 meters.
Gigabit Ethernet
Gigabit Ethernet is used to describe Ethernet implementations that provide bandwidth of 1000 Mbps (1 Gbps) or greater. This capacity has been built on the full-duplex capability and the UTP and fiber-optic media technologies of earlier Ethernet.
Metropolitan Area Network (MAN)
The increased cabling distances enabled by the use of fiber-optic cable in Ethernet-based networks has resulted in a blurring of the distinction between LANs and WANs. Ethernet was initially limited to LAN cable systems within single buildings, and then extended to between buildings. It can now be applied across a city in what is known as a Metropolitan Area Network (MAN).
Ethernet Frame Size
Both the Ethernet II and IEEE 802.3 standards define the minimum frame size as 64 bytes and the maximum as 1518 bytes. This includes all bytes from the Destination MAC Address field through the Frame Check Sequence (FCS) field.

The Preamble and Start Frame Delimiter fields are not included when describing the size of a frame.
Preamble and Start Frame Delimiter Fields

Ethernet Frame
The Preamble (7 bytes) and Start Frame Delimiter (SFD) (1 byte) fields are used for synchronization between the sending and receiving devices. These first eight bytes of the frame are used to get the attention of the receiving nodes. Essentially, the first few bytes tell the receivers to get ready to receive a new frame.
Length/Type Field

Ethernet Frame Field
For any IEEE 802.3 standard earlier than 1997, the Length field defines the exact length of the frame's data field. This value is used later as part of the FCS check to ensure that the message was received properly. If the purpose of the field is instead to designate a type, as in Ethernet II, the Type field describes which protocol is implemented.
Data and Pad Fields

Ethernet Frame field
The Data and Pad fields (46 - 1500 bytes) contain the encapsulated data from a higher layer, which is a generic Layer 3 PDU or, more commonly, an IPv4 packet. All frames must be at least 64 bytes long. If a small packet is encapsulated, the Pad field is used to increase the size of the frame to this minimum size.
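The Pad calculation can be sketched as follows. The Data field must carry at least 46 bytes, since 46 bytes of data plus 18 bytes of header and trailer gives the 64-byte minimum frame:

```python
# Sketch of Ethernet padding: a payload shorter than 46 bytes is
# extended with zero bytes so the whole frame reaches the 64-byte
# minimum (46 data + 18 bytes of header/trailer).
MIN_DATA = 46   # bytes

def pad_data(payload: bytes) -> bytes:
    """Pad a small payload with zero bytes up to the 46-byte minimum."""
    if len(payload) < MIN_DATA:
        payload += b"\x00" * (MIN_DATA - len(payload))
    return payload

print(len(pad_data(b"hello")))  # 46
```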
Minimum length of Ethernet Data segment
The minimum length is 64 bytes
Maximum length of Ethernet Data segment
The maximum length is 1500 bytes
MAC Address

Media Access Control (MAC) Address - Physical Address - Burned In Address (BIA)
MAC address is a 48-bit binary value expressed as 12 hexadecimal digits.

A unique identifier called a Media Access Control (MAC) address was created to assist in determining the source and destination address within an Ethernet network. Regardless of which variety of Ethernet was used, the naming convention provided a method for device identification at a lower level of the OSI model.
Organizationally Unique Identifier (OUI)
The MAC address value is a direct result of IEEE-enforced rules for vendors to ensure globally unique addresses for each Ethernet device. The rules established by IEEE require any vendor that sells Ethernet devices to register with IEEE. The IEEE assigns the vendor a 3-byte code, called the Organizationally Unique Identifier (OUI).
Burned-In Address (BIA)
The MAC address is often referred to as a burned-in address (BIA) because it is burned into ROM (Read-Only Memory) on the NIC. This means that the address is encoded into the ROM chip permanently - it cannot be changed by software.

However, when the computer starts up, the NIC copies the address into RAM. When examining frames, it is the address in RAM that is used as the source address to compare with the destination address. The MAC address is used by the NIC to determine if a message should be passed to the upper layers for processing.
Hexadecimal ("Hex")
Hexadecimal ("Hex") is a convenient way to represent binary values. Just as decimal is a base ten numbering system and binary is base two, hexadecimal is a base sixteen system.

The base 16 numbering system uses the numbers 0 to 9 and the letters A to F. The figure shows the equivalent decimal, binary, and hexadecimal values for binary 0000 to 1111. It is easier for us to express a value as a single hexadecimal digit than as four bits.
Bytes
8 bits (a byte) is a common binary grouping, binary 00000000 to 11111111 can be represented in hexadecimal as the range 00 to FF. Leading zeroes are always displayed to complete the 8-bit representation. For example, the binary value 0000 1010 is shown in hexadecimal as 0A.
Representing Hexadecimal
Hexadecimal is usually represented in text by the value preceded by 0x (for example 0x73) or a subscript 16. Less commonly, it may be followed by an H, for example 73H. However, because subscript text is not recognized in command line or programming environments, the technical representation of hexadecimal is preceded with "0x" (zero X). Therefore, the examples above would be shown as 0x0A and 0x73, respectively.
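These conversions can be demonstrated with Python's built-in formatting; each hexadecimal digit stands for exactly four bits:

```python
# Converting between binary, hexadecimal, and decimal representations.
value = 0b00001010           # binary 0000 1010

print(format(value, "02X"))  # '0A' - leading zero kept for the full byte
print(hex(value))            # '0xa' - the usual 0x prefix
print(int("0x73", 16))       # 115 - parsing 0x notation back to decimal
```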
Unicast MAC Address
A unicast MAC address is the unique address used when a frame is sent from a single transmitting device to single destination device.
Broadcast MAC and IP Address
With a broadcast, the packet contains a destination IP address that has all ones (1s) in the host portion. This numbering in the address means that all hosts on that local network (broadcast domain) will receive and process the packet. On Ethernet networks, the broadcast MAC address is 48 ones displayed as Hexadecimal FF-FF-FF-FF-FF-FF.
Multicast MAC and IP Address
Multicast addresses allow a source device to send a packet to a group of devices. Devices that belong to a multicast group are assigned a multicast group IP address. The range of multicast addresses is from 224.0.0.0 to 239.255.255.255.

Multicast addresses represent a group of addresses (sometimes called a host group), they can only be used as the destination of a packet. The source will always have a unicast address.

The multicast MAC address is a special value that begins with 01-00-5E in hexadecimal. The value ends by converting the lower 23 bits of the IP multicast group address into the remaining 6 hexadecimal characters of the Ethernet address. The remaining bit in the MAC address is always a "0".
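The mapping described above can be sketched as follows (the group addresses used here are chosen only for illustration):

```python
# Sketch of the IP-multicast-to-MAC mapping: keep the fixed 01-00-5E
# prefix plus a 0 bit, then copy the lower 23 bits of the multicast
# group address into the remaining bits of the Ethernet address.

def multicast_mac(group_ip: str) -> str:
    octets = [int(o) for o in group_ip.split(".")]
    ip_value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
    low23 = ip_value & 0x7FFFFF          # lower 23 bits of the group address
    mac = (0x01005E << 24) | low23       # 01-00-5E, a 0 bit, then 23 bits
    raw = format(mac, "012X")
    return "-".join(raw[i:i + 2] for i in range(0, 12, 2))

print(multicast_mac("224.0.0.5"))    # 01-00-5E-00-00-05
print(multicast_mac("239.255.0.1"))  # 01-00-5E-7F-00-01
```

Because only 23 of the 28 variable IP bits survive the mapping, multiple multicast group addresses can share the same MAC address.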
Hubs

Multi-port Repeaters
Hubs were created as intermediary network devices that enable more nodes to connect to the shared media. Also known as multi-port repeaters, hubs retransmit received data signals to all connected devices, except the one from which it received the signals.

Hubs and repeaters are intermediary devices that extend the distance that Ethernet cables can reach. Because hubs operate at the Physical layer, dealing only with the signals on the media, collisions can occur between the devices they connect and within the hubs themselves.
Collision Domain

Network Segment
The connected devices that access a common media via a hub or series of directly connected hubs make up what is known as a collision domain. A collision domain is also referred to as a network segment. Hubs and repeaters therefore have the effect of increasing the size of the collision domain.
Latency
The electrical signal that is transmitted takes a certain amount of time (latency) to propagate (travel) down the cable. Each hub or repeater in the signal's path adds latency as it forwards the bits from one port to the next.

This accumulated delay increases the likelihood that collisions will occur because a listening node may transition into transmitting signals while the hub or repeater is processing the message.
Preamble
In half-duplex mode, if a collision has not occurred, the sending device will transmit 64 bits of timing synchronization information, which is known as the Preamble.

The sending device will then transmit the complete frame.
Asynchronous Communication
Ethernet with throughput speeds of 10 Mbps and slower are asynchronous. An asynchronous communication in this context means that each receiving device will use the 8 bytes of timing information to synchronize the receive circuit to the incoming data and then discard the 8 bytes.
Synchronous Communication
Ethernet implementations with throughput of 100 Mbps and higher are synchronous. Synchronous communication in this context means that the timing information is not required.

However, for compatibility reasons, the Preamble and Start Frame Delimiter (SFD) fields are still present.
Bit Time
For each different media speed, a period of time is required for a bit to be placed and sensed on the media. This period of time is referred to as the bit time.
Slot Time
In half-duplex Ethernet, where data can only travel in one direction at once, slot time becomes an important parameter in determining how many devices can share a network. For all speeds of Ethernet transmission at or below 1000 Mbps, the standard describes how an individual transmission may be no smaller than the slot time.

The slot time ensures that if a collision is going to occur, it will be detected within the first 512 bits (4096 for Gigabit Ethernet) of the frame transmission.
Collision Fragment

Runt Frame
The 512-bit slot time establishes the minimum size of an Ethernet frame as 64 bytes. Any frame less than 64 bytes in length is considered a "collision fragment" or "runt frame" and is automatically discarded by receiving stations.
Interframe Spacing
The Ethernet standards require a minimum spacing between two non-colliding frames. This gives the media time to stabilize after the transmission of the previous frame and time for the devices to process the frame. Referred to as the interframe spacing, this time is measured from the last bit of the FCS field of one frame to the first bit of the Preamble of the next frame.
Jamming Signal
As soon as a collision is detected, the sending devices transmit a 32-bit "jam" signal that will enforce the collision. This ensures that all devices in the LAN detect the collision.

It is important that the jam signal not be detected as a valid frame; otherwise the collision would not be identified. The most commonly observed data pattern for a jam signal is simply a repeating 1, 0, 1, 0 pattern, the same as the Preamble.
Backoff Timing
After a collision occurs and all devices allow the cable to become idle (each waits the full interframe spacing), the devices whose transmissions collided must wait an additional - and potentially progressively longer - period of time before attempting to retransmit the collided frame.

The waiting period is intentionally designed to be random so that two stations do not delay for the same amount of time before retransmitting, which would result in more collisions. This is accomplished in part by expanding the interval from which the random retransmission time is selected on each retransmission attempt. The waiting period is measured in increments of the parameter slot time.
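The algorithm described above is known as truncated binary exponential backoff; it can be sketched as follows, with the interval ceasing to grow after ten doublings per the standard algorithm:

```python
import random

# Sketch of truncated binary exponential backoff: after the n-th
# consecutive collision, a station picks a random number of slot times
# in the range 0 .. 2^min(n, 10) - 1 before attempting to retransmit.
SLOT_TIME_BITS = 512   # slot time for Ethernet at or below 100 Mbps

def backoff_slots(attempt: int) -> int:
    k = min(attempt, 10)               # the interval stops growing at 10
    return random.randint(0, 2 ** k - 1)

# The possible range doubles on each retry:
for attempt in (1, 2, 3):
    print(f"attempt {attempt}: 0..{2 ** min(attempt, 10) - 1} slot times")
```

Doubling the selection range on each attempt makes it progressively less likely that two stations pick the same delay and collide again.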
Ethernet PHY
Name for the Ethernet Physical Layer
10Base-T Ethernet Speed
10 Mbps
Fast Ethernet Speed
100 Mbps
Gigabit Ethernet Speed
1000 Mbps
10 Gigabit Ethernet Speed
10 Gbps
10 Mbps Ethernet - 10BASE-T
10BASE-T uses Manchester-encoding over two unshielded twisted-pair cables. Cat5 or later cabling is typically used today.

10 Mbps Ethernet is considered to be classic Ethernet and uses a physical star topology. Ethernet 10BASE-T links could be up to 100 meters in length before requiring a hub or repeater.

10BASE-T uses two pairs of a four-pair cable and is terminated at each end with an 8-pin RJ-45 connector. The pair connected to pins 1 and 2 are used for transmitting and the pair connected to pins 3 and 6 are used for receiving. The figure shows the RJ45 pinout used with 10BASE-T Ethernet.
100 Mbps - Fast Ethernet
In the mid to late 1990s, several new 802.3 standards were established to describe methods for transmitting data over Ethernet media at 100 Mbps. These standards used different encoding requirements for achieving these higher data rates.

100 Mbps Ethernet, also known as Fast Ethernet, can be implemented using twisted-pair copper wire or fiber media. The most popular implementations of 100 Mbps Ethernet are:
• 100BASE-TX using Cat5 or later UTP
• 100BASE-FX using fiber-optic cable

Because the higher frequency signals used in Fast Ethernet are more susceptible to noise, two separate encoding steps are used by 100-Mbps Ethernet to enhance signal integrity.
What type of cable does 100BASE-TX use?
100BASE-TX uses Cat5 or later UTP
What type of cable does 100BASE-FX use?
100BASE-FX uses fiber-optic cable
100BASE-TX
100BASE-TX was designed to support transmission over either two pairs of Category 5 UTP copper wire or two strands of optical fiber. The 100BASE-TX implementation uses the same two pairs and pinouts of UTP as 10BASE-T. However, 100BASE-TX requires Category 5 or later UTP. The 4B/5B encoding is used for 100BASE-TX Ethernet.

As with 10BASE-T, 100BASE-TX is connected as a physical star. However, unlike 10BASE-T, 100BASE-TX networks typically use a switch at the center of the star instead of a hub. At about the same time that 100BASE-TX technologies became mainstream, LAN switches were also being widely deployed. These concurrent developments led to their natural combination in the design of 100BASE-TX networks.
100BASE-FX
The 100BASE-FX standard uses the same signaling procedure as 100BASE-TX, but over optical fiber media rather than UTP copper. Although the encoding, decoding, and clock recovery procedures are the same for both media, the signal transmission is different - electrical pulses in copper and light pulses in optical fiber. 100BASE-FX uses Low Cost Fiber Interface Connectors (commonly called the duplex SC connector).

Fiber implementations are point-to-point connections, that is, they are used to interconnect two devices. These connections may be between two computers, between a computer and a switch, or between two switches.
1000 Mbps - Gigabit Ethernet
The development of Gigabit Ethernet standards resulted in specifications for UTP copper, single-mode fiber, and multimode fiber. On Gigabit Ethernet networks, bits occur in a fraction of the time that they take on 100 Mbps networks and 10 Mbps networks. With signals occurring in less time, the bits become more susceptible to noise, and therefore timing is critical. The question of performance is based on how fast the network adapter or interface can change voltage levels and how well that voltage change can be detected reliably 100 meters away, at the receiving NIC or interface.

At these higher speeds, encoding and decoding data is more complex. Gigabit Ethernet uses two separate encoding steps. Data transmission is more efficient when codes are used to represent the binary bit stream. Encoding the data enables synchronization, efficient usage of bandwidth, and improved signal-to-noise ratio characteristics.
1000BASE-T Ethernet
1000BASE-T Ethernet provides full-duplex transmission using all four pairs in Category 5 or later UTP cable. Gigabit Ethernet over copper wire enables an increase from 100 Mbps per wire pair to 125 Mbps per wire pair, or 500 Mbps for the four pairs. Each wire pair signals in full duplex, doubling the 500 Mbps to 1000 Mbps.

1000BASE-T uses 4D-PAM5 line encoding to obtain 1 Gbps data throughput. This encoding scheme enables the transmission signals over four wire pairs simultaneously. It translates an 8-bit byte of data into a simultaneous transmission of four code symbols (4D), which are sent over the media, one on each pair, as 5-level Pulse Amplitude Modulated (PAM5) signals. This means that every symbol corresponds to two bits of data. Because the information travels simultaneously across the four paths, the circuitry has to divide frames at the transmitter and reassemble them at the receiver.
1000BASE-SX and 1000BASE-LX
The fiber versions of Gigabit Ethernet - 1000BASE-SX and 1000BASE-LX - offer the following advantages over UTP: noise immunity, small physical size, and increased unrepeated distances and bandwidth.
All 1000BASE-SX and 1000BASE-LX versions support full-duplex binary transmission at 1250 Mbps over two strands of optical fiber.

The transmission coding is based on the 8B/10B encoding scheme. Because of the overhead of this encoding, the data transfer rate is still 1000 Mbps.
Purpose of Switches
Switches allow the segmentation of the LAN into separate collision domains.

Each port of the switch represents a separate collision domain and provides the full media bandwidth to the node or nodes connected on that port. With fewer nodes in each collision domain, there is an increase in the average bandwidth available to each node, and collisions are reduced.
Selective Forwarding

Store and Forward
Ethernet switches selectively forward individual frames from a receiving port to the port where the destination node is connected. This selective forwarding process can be thought of as establishing a momentary point-to-point connection between the transmitting and receiving nodes. The connection is made only long enough to forward a single frame. During this instant, the two nodes have a full bandwidth connection between them and represent a logical point-to-point connection.

A LAN switch will buffer an incoming frame and then forward it to the proper port when that port is idle. This process is referred to as store and forward.

With store and forward switching, the switch receives the entire frame, checks the FCS for errors, and forwards the frame to the appropriate port for the destination node. Because the nodes do not have to wait for the media to be idle, the nodes can send and receive at full media speed without losses due to collisions or the overhead associated with managing collisions.
MAC Table

Switch Table

Bridge Table
The switch maintains a table, called a MAC table, which matches a destination MAC address with the port used to connect to a node. For each incoming frame, the destination MAC address in the frame header is compared to the list of addresses in the MAC table. If a match is found, the port number in the table that is paired with the MAC address is used as the exit port for the frame.

The MAC table can be referred to by many different names. It is often called the switch table. Because switching was derived from an older technology called transparent bridging, the table is sometimes called the bridge table.
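Conceptually, the MAC table is a map from MAC address to exit port (the addresses below are made up):

```python
# MAC table: destination MAC address -> exit port number.
mac_table = {
    "00:1A:2B:3C:4D:5E": 1,   # node reachable via port 1
    "00:1A:2B:3C:4D:5F": 2,   # node reachable via port 2
}

dest = "00:1A:2B:3C:4D:5F"
exit_port = mac_table.get(dest)   # None would mean "unknown destination"
print(exit_port)  # 2
```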
MAC Table Aging
The entries in the MAC table acquired by the Learning process are time stamped. The timestamp is used as a means for removing old entries from the MAC table. After an entry is made, a procedure begins a countdown using the timestamp as the starting value. When the countdown reaches 0, the entry is removed from the table; the entry is refreshed (re-stamped) each time the switch receives a frame from that node on the same port.
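The aging mechanism can be sketched like this (the 300-second aging time is a common default, used here only as an illustration):

```python
import time

AGING_SECONDS = 300   # illustrative aging time

mac_table = {}        # MAC address -> (port, timestamp)

def learn(mac, port):
    # (Re)stamp the entry every time a frame arrives from this node.
    mac_table[mac] = (port, time.time())

def expire(now=None):
    # Remove any entry whose countdown has reached zero.
    now = time.time() if now is None else now
    for mac, (port, stamp) in list(mac_table.items()):
        if now - stamp > AGING_SECONDS:
            del mac_table[mac]

learn("00:1A:2B:3C:4D:5E", 3)
expire(now=time.time() + 301)   # simulate 301 seconds passing
print(mac_table)                # {} - the stale entry has aged out
```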
Switch Flooding
If the switch does not know to which port to send a frame because the destination MAC address is not in the MAC table, the switch sends the frame to all ports except the port on which the frame arrived. The process of sending a frame to all segments is known as flooding. The switch does not forward the frame to the port on which it arrived because any destination on that segment will have already received the frame. Flooding is also used for frames sent to the broadcast MAC address.
Selective Forwarding
Selective forwarding is the process of examining a frame's destination MAC address and forwarding it out the appropriate port. This is the central function of the switch. When a frame arrives with a destination MAC address the switch has already learned, the address is matched to an entry in the MAC table and the frame is forwarded to the corresponding port. Instead of flooding the frame to all ports, the switch sends the frame to the destination node via that single port. This action is called forwarding.
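Forwarding, flooding, and filtering can be sketched together as one egress-port decision (a toy model, not real switch code):

```python
BROADCAST = "FF:FF:FF:FF:FF:FF"

def forward(dest_mac, ingress_port, mac_table, all_ports):
    """Return the set of egress ports for a frame: forward, flood, or filter."""
    if dest_mac != BROADCAST and dest_mac in mac_table:
        port = mac_table[dest_mac]
        # Filtering: never send a frame back out the port it arrived on.
        return set() if port == ingress_port else {port}
    # Unknown unicast or broadcast: flood to all ports except the ingress port.
    return set(all_ports) - {ingress_port}

table = {"00:AA:BB:CC:DD:EE": 2}
print(forward("00:AA:BB:CC:DD:EE", 1, table, {1, 2, 3, 4}))  # {2}          forward
print(forward("11:22:33:44:55:66", 1, table, {1, 2, 3, 4}))  # {2, 3, 4}    flood
print(forward("00:AA:BB:CC:DD:EE", 2, table, {1, 2, 3, 4}))  # set()        filter
```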
Frame Filtering
In some cases, a frame is not forwarded. This process is called frame filtering. One use of filtering has already been described: a switch does not forward a frame to the same port on which it arrived. A switch will also drop a corrupt frame. If a frame fails a CRC check, the frame is dropped. An additional reason for filtering a frame is security. A switch has security settings for blocking frames to and/or from selective MAC addresses or specific ports.
What is the purpose of ARP
Resolving IPv4 Addresses to MAC Addresses - For a frame to be placed on the LAN media, it must have a destination MAC address. When a packet is sent to the Data Link layer to be encapsulated into a frame, the node refers to a table in its memory to find the Data Link layer address that is mapped to the destination IPv4 address. This table is called the ARP table or the ARP cache. The ARP table is stored in the RAM of the device.
ARP Map
We call the relationship between the two values a map - it simply means that you can locate an IP address in the table and discover the corresponding MAC address. The ARP table caches the mapping for the devices on the local LAN.
ARP Request
A device can get an address pair by broadcasting an ARP request. ARP sends a Layer 2 broadcast to all devices on the Ethernet LAN. The frame contains an ARP request packet with the IP address of the destination host. The node receiving the frame that identifies the IP address as its own responds by sending an ARP reply packet back to the sender as a unicast frame. This response is then used to make a new entry in the ARP table.
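The cache-then-broadcast behaviour can be sketched as follows (addresses are made up, and the ARP request is simulated with a callback standing in for the broadcast and reply):

```python
# Toy ARP resolution: consult the cache first; otherwise broadcast a request.
arp_cache = {"192.168.1.10": "00:1A:2B:3C:4D:5E"}

def resolve(ip, send_arp_request):
    if ip in arp_cache:                  # cache hit: no broadcast needed
        return arp_cache[ip]
    mac = send_arp_request(ip)           # broadcast "who has <ip>?" on the LAN
    arp_cache[ip] = mac                  # cache the unicast reply for next time
    return mac

# Simulated responder standing in for the host that owns the IP address.
print(resolve("192.168.1.20", lambda ip: "00:1A:2B:3C:4D:5F"))
print(arp_cache["192.168.1.20"])  # the newly learned entry
```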
Proxy ARP
There are circumstances under which a host might send an ARP request seeking to map an IPv4 address outside of the range of the local network. To provide a MAC address for these hosts, a router interface may use a proxy ARP to respond on behalf of these remote hosts. This means that the ARP cache of the requesting device will contain the MAC address of the gateway mapped to any IP addresses not on the local network. Using proxy ARP, a router interface acts as if it is the host with the IPv4 address requested by the ARP request. By "faking" its identity, the router accepts responsibility for routing packets to the "real" destination.
ARP spoofing

ARP poisoning
In some cases, the use of ARP can lead to a potential security risk. ARP spoofing, or ARP poisoning, is a technique used by an attacker to inject a wrong MAC address association into a network by issuing fake ARP messages. An attacker forges the MAC address of a device, and frames are then sent to the wrong destination. Manually configuring static ARP associations is one way to prevent ARP spoofing. Authorized MAC addresses can be configured on some network devices to restrict network access to only those devices listed.
Fixed configurations
Networking devices, such as routers and switches, come in both fixed and modular physical configurations. Fixed configurations have a specific number and type of ports or interfaces.
Modular devices
Modular devices have expansion slots that provide the flexibility to add new modules as requirements evolve. Most modular devices come with a basic number of fixed ports as well as expansion slots. Since routers can be used for connecting different numbers and types of networks, care must be taken to select the appropriate modules and interfaces for the specific media.
Patch Cables
We use patch cables to connect individual devices to these wall jacks. Allowed patch cable length depends on the horizontal cable and telecommunication room cable lengths. Recall that the combined maximum length for these three areas cannot exceed 100 meters.
Patch Panels
Inside the telecommunications room, patch cords make connections between the patch panels, where the horizontal cables terminate, and the intermediary devices. Patch cables also interconnect these intermediary devices.
Horizontal Cabling
Horizontal cabling refers to the cables connecting the telecommunication rooms with the work areas. The maximum length for a cable from a termination point in the telecommunication room to the termination at the work area outlet must not exceed 90 meters.
Permanent Link
This 90 meter maximum horizontal cabling distance is referred to as the permanent link because it is installed in the building structure. The horizontal media runs from a patch panel in the telecommunications room to a wall jack in each work area. Connections to the devices are made with patch cables.
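The length budget can be sketched as simple arithmetic (a rough illustration of the 90 m / 100 m rule):

```python
# The permanent link (patch panel to wall jack) may be at most 90 m,
# and the whole channel end-to-end may be at most 100 m.
PERMANENT_LINK_MAX = 90   # metres of horizontal cabling in the building
CHANNEL_MAX = 100         # metres, total end-to-end limit

patch_cord_budget = CHANNEL_MAX - PERMANENT_LINK_MAX
print(patch_cord_budget)  # 10 m left for patch cables at both ends combined
```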
Backbone Cabling

Vertical Cabling
Backbone cabling refers to the cabling used to connect the telecommunication rooms to the equipment rooms, where the servers are often located. Backbones, or vertical cabling, are used for aggregated traffic, such as traffic to and from the Internet and access to corporate resources at a remote location. A large portion of the traffic from the various work areas will use the backbone cabling to access resources outside the area or facility. Therefore, backbones typically require high bandwidth media such as fiber-optic cabling.
Raceways
The ease of cable installation varies according to cable types and building architecture. Access to floor or roof spaces, and the physical size and properties of the cable influence how easily a cable can be installed in various buildings. Cables in buildings are typically installed in raceways.

A raceway is an enclosure or tube that encloses and protects the cable. A raceway also keeps cabling neat and easy to thread.
Electromagnetic Interference (EMI)

Radio Frequency Interference (RFI)
Electromagnetic Interference (EMI) and Radio Frequency Interference (RFI) can be produced by electrical machines, lightning, and other communications devices, including computers and radio equipment. Wireless is the medium most susceptible to RFI.
MDI (media-dependent interface)
The MDI (media-dependent interface) uses the normal Ethernet pinout. Pins 1 and 2 are used for transmitting and pins 3 and 6 are used for receiving. Devices such as computers, servers, or routers will have MDI connections.
MDIX (media-dependent interface, crossover)
The devices that provide LAN connectivity - usually hubs or switches - typically use MDIX (media-dependent interface, crossover) connections. The MDIX connection swaps the transmit pairs internally. This swapping allows the end devices to be connected to the hub or switch using a straight-through cable.
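The internal crossover inside an MDIX port can be sketched as a pin mapping (for the 10/100BASE-T pinout described above; the table is illustrative):

```python
# MDI pinout: pins 1 and 2 transmit, pins 3 and 6 receive.
MDI_TX, MDI_RX = (1, 2), (3, 6)

# An MDIX port swaps the pairs internally, so a straight-through cable
# lands the far end's transmit pins on this port's receive pins.
MDIX_SWAP = {1: 3, 2: 6, 3: 1, 6: 2}

print([MDIX_SWAP[p] for p in MDI_TX])  # [3, 6] - transmit arrives on receive pins
print([MDIX_SWAP[p] for p in MDI_RX])  # [1, 2]
```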
Uses for Straight-Through Cables
Use straight-through cables for the following connections:
• Switch to a router Ethernet port
• Computer to switch
• Computer to hub
Uses for Crossover Cables
Crossover cables directly connect the following devices on a LAN:
• Switch to switch
• Switch to hub
• Hub to hub
• Router to router Ethernet port connection
• Computer to computer
• Computer to a router Ethernet port
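The two lists above reduce to one rule of thumb: devices with the same pinout need a crossover cable, devices with different pinouts need a straight-through cable. A toy sketch (the device grouping is the usual convention; note that modern auto-MDIX ports detect and adapt automatically):

```python
# End devices (computers, routers) use MDI pinouts;
# LAN infrastructure devices (switches, hubs) use MDIX pinouts.
MDI = {"computer", "router"}
MDIX = {"switch", "hub"}

def cable_for(device_a, device_b):
    same_pinout = (device_a in MDI) == (device_b in MDI)
    return "crossover" if same_pinout else "straight-through"

print(cable_for("computer", "switch"))  # straight-through
print(cable_for("switch", "switch"))    # crossover
print(cable_for("computer", "router"))  # crossover
```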
Data Communications Equipment (DCE)
Data Communications Equipment (DCE) - A device that supplies the clocking services to another device. Typically, this device is at the WAN access provider end of the link.
Data Terminal Equipment (DTE)
Data Terminal Equipment (DTE) - A device that receives clocking services from another device and adjusts accordingly. Typically, this device is at the WAN customer or user end of the link.
Channel Service Unit/Data Service Unit (CSU/DSU)
If a serial connection is made directly to a service provider or to a device that provides signal clocking such as a channel service unit/data service unit (CSU/DSU), the router is considered to be data terminal equipment (DTE) and will use a DTE serial cable.
Clock Rate
DCEs and DTEs are used in WAN connections. The communication via a WAN connection is maintained by providing a clock rate that is acceptable to both the sending and the receiving device. In most cases, the telco or ISP provides the clocking service that synchronizes the transmitted signal.

By assigning a clock rate to the router, the timing is set. This allows a router to adjust the speed of its communication operations, thereby synchronizing with the devices connected to it.
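In a lab, when a router plays the DCE role on a back-to-back serial link, the clock rate is set on its serial interface. A hedged example (the interface numbering and the 64000 bps rate are illustrative):

```
Router(config)# interface serial 0/0/0
Router(config-if)# clock rate 64000
```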
WAN Interfaces - Serial
Serial WAN interfaces are used for connecting WAN devices to the CSU/DSU. A CSU/DSU is a device used to make the physical connection between data networks and a WAN provider's circuits.
To establish communication with a router over a remote WAN connection, a WAN interface is assigned a Layer 3 address (IPv4 address).
Console Interface
The console interface is the primary interface for initial configuration of a Cisco router or switch. It is also an important means of troubleshooting. It is important to note that with physical access to the router's console interface, an unauthorized person can interrupt or compromise network traffic. Physical security of network devices is extremely important.
Auxiliary (AUX) Interface
This interface is used for remote management of the router. Typically, a modem is connected to the AUX interface for dial-in access. From a security standpoint, enabling the option to connect remotely to a network device carries with it the responsibility of maintaining vigilant device management.
Terminal Emulator
A terminal emulator is a software program that allows one computer to access the functions on another device. It allows a person to use the display and keyboard on one computer to operate another device, as if the keyboard and display were directly connected to the other device. The cable connection between the computer running the terminal emulation program and the device is often made via the serial interface.
HyperTerminal Settings for making a console connection
To configure HyperTerminal, confirm the chosen serial port number, and then configure the port with these settings:
• Bits per second: 9600
• Data bits: 8
• Parity: None
• Stop bits: 1
• Flow control: None
Cisco Internetwork Operating System (IOS)
The Cisco Internetwork Operating System (IOS) is the system software in Cisco devices. It is the core technology that extends across most of the Cisco product line. The Cisco IOS is used for most Cisco devices regardless of the size and type of the device. It is used for routers, LAN switches, small Wireless Access Points, large routers with dozens of interfaces, and many other devices.
Flash memory
Flash memory provides non-volatile storage. This means that the contents of the memory are not lost when the device loses power. Even though the contents are not lost they can be changed or overwritten if needed.
Public Switched Telephone Network (PSTN)
Public Switched Telephone Network (PSTN) refers to the equipment and devices that telcos use to create basic telephone service between any two phones in the world.
Synchronous circuit
The CSU/DSUs on the ends of a leased line create what is called a synchronous circuit, because not only do the CSU/DSUs try to run at the same speed, they adjust their speeds to match, or synchronize with, the other CSU/DSU.
A CSU/DSU terminates a digital local loop.

(true or false)
True.
A CSU/DSU terminates an analog local loop.

(true or false)
False.
A modem terminates an analog local loop.

(true or false)
True.
A modem terminates a digital local loop.

(true or false)
False.
A router is commonly considered a DTE device.

(true or false)
True.