How Autonegotiation Works
First, let’s cover what autonegotiation does not do: when autonegotiation is enabled on a port, it does not automatically determine the configuration of the port on the other side of the Ethernet cable and then match it. This is a common misconception that often leads to problems.
Autonegotiation is a protocol and, as with any protocol, it only works if it’s running on both sides of the link. In other words, if one side of a link is running autonegotiation and the other side of the link is not, autonegotiation cannot determine the speed and duplex configuration of the other side. If autonegotiation is running on the other side of the link, the two devices decide together on the best speed and duplex mode. Each interface advertises the speeds and duplex modes at which it can operate, and the best match is selected (higher speeds and full duplex are preferred).
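The "best match" selection can be sketched in a few lines of code. This is an illustrative simplification, not any vendor's actual implementation; the priority ordering (higher speed first, full duplex before half at the same speed) follows the general rule described above.

```python
# Illustrative sketch of autonegotiation's capability matching.
# Each side advertises the (speed, duplex) modes it supports; the
# highest-priority mode both sides advertise wins.

# Higher speed beats lower; full duplex beats half at the same speed.
PRIORITY = [
    (1000, "full"), (1000, "half"),
    (100, "full"), (100, "half"),
    (10, "full"), (10, "half"),
]

def negotiate(local_caps, remote_caps):
    """Return the best (speed, duplex) both sides advertise, or None."""
    common = set(local_caps) & set(remote_caps)
    for mode in PRIORITY:
        if mode in common:
            return mode
    return None

# A 10/100 NIC against a switch port that supports 100 Mbps and
# 1 Gbps settles on 100 Mbps full duplex.
nic = [(10, "half"), (10, "full"), (100, "half"), (100, "full")]
switch = [(100, "half"), (100, "full"), (1000, "full")]
print(negotiate(nic, switch))  # (100, 'full')
```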
The confusion exists primarily because autonegotiation always seems to work. This is because of a feature called parallel detection, which kicks in when the autonegotiation process fails to find autonegotiation running on the other end of the link. Parallel detection works by sending the signal being received to the local 10Base-T, 100Base-TX, and 100Base-T4 drivers. If any one of these drivers detects the signal, the interface is set to that speed.
Parallel detection determines only the link speed, not the supported duplex modes. This is an important consideration because the common modes of Ethernet have differing levels of duplex support:
10Base-T was originally designed without full-duplex support. Some implementations of 10Base-T support full duplex, but many do not.
100Base-T has long supported full duplex, which has been the preferred method for connecting 100 Mbps links for as long as the technology has existed. However, the default behavior of 100Base-T is usually half duplex, and full-duplex support must be set manually.
Gigabit Ethernet has a much more robust autonegotiation protocol than 10M or 100M Ethernet. Gigabit interfaces should be left to autonegotiate in most situations.
10 Gigabit (10G) connections are generally dependent on fiber transceivers or special copper connections that differ from the RJ-45 connections seen on other Ethernet types. The hardware usually dictates how 10G connects. On a 6500, 10G interfaces usually require XENPAKs, which only run at 10G. On a Nexus 5000 switch, some of the ports are 1G/10G and can be changed with the speed command.
Because of the lack of widespread full-duplex support on 10Base-T and the typical default behavior of 100Base-T, when autonegotiation falls through to the parallel detection phase (which only detects speed), the safest thing for the driver to do is to choose half-duplex mode for the link.
As networks and networking hardware evolve, higher-speed links with more robust negotiation protocols will likely make negotiation problems a thing of the past. That being said, I still see 20-year-old routers in service, so knowing how autonegotiation works will be a valuable skill for years to come.
When Autonegotiation Fails
In a half-duplex environment, the receiving (RX) line is monitored. If a frame is present on the RX link, no frames are sent until the RX line is clear. If a frame is received on the RX line while a frame is being sent on the transmitting (TX) line, a collision occurs. Collisions cause the collision error counter to be incremented—and the sending frame to be retransmitted—after a random back-off delay. This may seem counterintuitive in a modern switched environment, but remember that Ethernet was originally designed to work over a single wire. Switches and twisted pair came along later.
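The random back-off delay mentioned above follows Ethernet's truncated binary exponential backoff. Here is a rough sketch of the rule (the slot-time constant is for 10 Mbps Ethernet; this is an illustration of the algorithm, not driver code):

```python
import random

# After the nth collision on a frame, a station waits a random number
# of slot times chosen from 0..(2^min(n,10) - 1). After 16 failed
# attempts, the frame is dropped and an excessive-collisions error
# is recorded.

SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_slots(collision_count):
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collision_count, 10)          # the "truncated" part
    return random.randint(0, 2**k - 1)

# After the 3rd collision, the station waits 0-7 slot times.
print(backoff_slots(3) * SLOT_TIME_US, "microseconds")
```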
In full-duplex operation, the RX line is not monitored, and the TX line is always considered available. Collisions do not occur in full-duplex mode because the RX and TX lines are completely independent.
When one side of the link is full duplex and the other side is half duplex, a large number of collisions will occur on the half-duplex side. The issue may not be obvious, because a half-duplex interface normally shows collisions, while a full-duplex interface does not. Since full duplex means never having to test for a clear-to-send condition, a full-duplex interface will not record any errors in this situation. The problem should present itself as excessive collisions, but only on the half-duplex side.
Gigabit Ethernet uses a substantially more robust autonegotiation mechanism than the one described in this chapter. Gigabit Ethernet should thus always be set to autonegotiation, unless there is a compelling reason not to do so (such as an interface that will not properly negotiate). Even then, this should be considered a temporary workaround until the misbehaving part can be replaced.
When expanding a network using VLANs, you face the same limitations. If you connect another switch to a port that is configured for VLAN 20, the new switch will be able to forward frames only to or from VLAN 20. If you wanted to connect two switches, each containing four VLANs, you would need four links between the switches: one for each VLAN. A solution to this problem is to deploy trunks between switches. Trunks are links that carry frames for more than one VLAN.
Another way to route between VLANs is commonly known as the router-on-a-stick configuration. Instead of running a link from each VLAN to a router interface, you can run a single trunk from the switch to the router. All the VLANs will then pass over a single link.
Deploying a router on a stick saves a lot of interfaces on both the switch and the router. The downside is that the trunk is only one link, and the total bandwidth available on that link is only 10 Mbps. In contrast, when each VLAN has its own link, each VLAN has 10 Mbps to itself. Also, don’t forget that the router is passing traffic between VLANs, so chances are each frame will be seen twice on the same link—once to get to the router, and once to get back to the destination VLAN.
Jack is connected to VLAN 20 on Switch B, and Diane is connected to VLAN 20 on Switch A. Because there is a trunk connecting these two switches together, assuming the trunk is allowed to carry traffic for all configured VLANs, Jack will be able to communicate with Diane. Notice that the ports to which the trunk is connected are not assigned VLANs. These ports are trunk ports and, as such, do not belong to a single VLAN.
Possible switch port modes related to trunking
In Cisco networks, trunking is a special function that can be assigned to a port, making that port capable of carrying traffic for any or all of the VLANs accessible by a particular switch. Such a port is called a trunk port, in contrast to an access port, which carries traffic only to and from the specific VLAN assigned to it. A trunk port marks frames with special identifying tags (either ISL tags or 802.1Q tags) as they pass between switches, so each frame can be routed to its intended VLAN. An access port does not provide such tags, because the VLAN for it is pre-assigned, and identifying markers are therefore unnecessary.
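The 802.1Q tagging operation itself is mechanically simple: a four-byte tag (the 0x8100 TPID followed by a 16-bit field containing the priority bits and the 12-bit VLAN ID) is inserted between the source MAC address and the EtherType. The following sketch shows the byte manipulation; real switches do this in hardware:

```python
import struct

# Insert an 802.1Q tag into an untagged Ethernet frame.
def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    tci = (priority << 13) | (vlan_id & 0x0FFF)   # PCP + DEI(0) + VLAN ID
    tag = struct.pack("!HH", 0x8100, tci)         # TPID, then TCI
    # dst MAC (6 bytes) + src MAC (6 bytes), then the tag, then the rest
    return frame[:12] + tag + frame[12:]

untagged = bytes(12) + b"\x08\x00" + b"payload"   # dummy frame, IPv4 EtherType
tagged = tag_frame(untagged, vlan_id=20)
print(tagged[12:16].hex())  # 8100 followed by VLAN 20 -> '81000014'
```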
VTP #VLAN Trunking Protocol
VTP allows VLAN configurations to be managed on a single switch. Those changes are then propagated to every switch in the VTP domain. A VTP domain is a group of connected switches with the same VTP domain string configured. Interconnected switches with differently configured VTP domains will not share VLAN information. A switch can only be in one VTP domain; the VTP domain is null by default. Switches with mismatched VTP domains will not negotiate trunk protocols. If you wish to establish a trunk between switches with mismatched VTP domains, you must have their trunk ports set to mode trunk.
The main idea of VTP is that changes are made on VTP servers. These changes are then propagated to VTP clients, and any other VTP servers in the domain. Switches can be configured manually as VTP servers, VTP clients, or the third possibility, VTP transparent. A VTP transparent switch receives and forwards VTP updates but does not update its configuration to reflect the changes they contain. Some switches default to VTP server, while others default to VTP transparent. VLANs cannot be locally configured on a switch in client mode.
There is actually a fourth state for a VTP switch: off. A switch in VTP mode off will not accept VTP packets, and therefore will not forward them either. This can be handy if you want to stop the forwarding of VTP updates at some point in the network.
SW1 and SW2 are both VTP servers. SW3 is set to VTP transparent, and SW4 is a VTP client. Any changes to the VLAN information on SW1 will be propagated to SW2 and SW4. The changes will be passed through SW3 but will not be acted upon by that switch. Because the switch does not act on VTP updates, its VLANs must be configured manually if users on that switch are to interact with the rest of the network.
When a switch receives a VTP update, the first thing it does is compare the VTP domain name in the update to its own. If the domains are different, the update is ignored. If they are the same, the switch compares the update’s configuration revision number to its own. If the revision number of the update is lower than or equal to the switch’s own revision number, the update is ignored. If the update has a higher revision number, the switch sends an advertisement request. The response to this request is another summary advertisement, followed by subset advertisements. Once it has received the subset advertisements, the switch has all the information necessary to implement the required changes in the VLAN configuration.
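The decision logic above can be sketched as follows. The field names and structure here are illustrative, not the actual VTP packet layout:

```python
# Sketch of how a VTP switch handles a summary advertisement:
# ignore updates from other domains or with a revision number that
# isn't higher than its own; otherwise adopt the advertised VLANs.

class VtpSwitch:
    def __init__(self, domain, revision=0, vlans=None):
        self.domain = domain
        self.revision = revision
        self.vlans = vlans or {1: "default"}

    def receive_summary(self, update):
        if update["domain"] != self.domain:
            return "ignored: domain mismatch"
        if update["revision"] <= self.revision:
            return "ignored: revision not newer"
        # In real VTP, the switch now sends an advertisement request
        # and applies the subset advertisements that come back.
        self.revision = update["revision"]
        self.vlans = dict(update["vlans"])
        return "applied"

sw = VtpSwitch("CORP", revision=5)
print(sw.receive_summary({"domain": "CORP", "revision": 9,
                          "vlans": {1: "default", 100: "servers"}}))  # applied
print(sw.vlans)  # {1: 'default', 100: 'servers'}
```

Note that the revision comparison is exactly what makes the "rogue switch" scenario described later so dangerous: a higher revision number wins, regardless of where it came from.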
When a switch’s VTP domain is null, if it receives a VTP advertisement over a trunk link, it will inherit the VTP domain and VLAN configuration from the switch on the other end of the trunk. This will happen only over manually configured trunks, as DTP negotiations cannot take place unless a VTP domain is configured. Be careful of this behavior, as it can cause serious heartache, nausea, and potential job loss if you’re not (or the person before you wasn’t).
On large or congested networks, VTP can create a problem when excess traffic is sent across trunks needlessly. The switches in the gray box all have ports assigned to VLAN 100, while the rest of the switches do not. With VTP active, all of the switches will have VLAN 100 configured, and as such will receive broadcasts initiated on that VLAN. However, those without ports assigned to VLAN 100 have no use for the broadcasts.
On a busy VLAN, broadcasts can amount to a significant percentage of traffic. In this case, all that traffic is being needlessly sent over the entire network, and is taking up valuable bandwidth on the interswitch trunks.
VTP pruning prevents traffic originating from a particular VLAN from being sent to switches on which that VLAN is not active (i.e., switches that do not have ports connected and configured for that VLAN). With VTP pruning enabled, the VLAN 100 broadcasts will be restricted to switches on which VLAN 100 is actively in use.
VTP pruning must be enabled or disabled throughout the entire VTP domain. Failure to configure VTP pruning properly can result in instability in the network. By default, all VLANs up to VLAN 1001 are eligible for pruning, except VLAN 1, which can never be pruned. VTP does not support the extended VLANs above VLAN 1001, so VLANs higher than 1001 cannot be pruned. If you enable VTP pruning on a VTP server, VTP pruning will automatically be enabled for the entire domain.
Dangers of VTP
Remember that many switches are VTP servers by default. Remember, also, that when a switch participating in VTP receives an update that has a higher revision number than its own configuration’s revision number, the switch will implement the new scheme. In our scenario, the lab’s 3750s had been functioning as a standalone network with the same VTP domain as the regular network. Multiple changes were made to their VLAN configurations, resulting in a high configuration revision number. When these switches, which were VTP servers, were connected to the more stable production network, they automatically sent out updates. Each switch on the main network, including the core 6509s, received an update with a higher revision number than its current configuration. Consequently, they all requested the VLAN configuration from the rogue 3750s and implemented that design.
EtherChannel is the Cisco term for the technology that enables the bonding of up to eight physical Ethernet links into a single logical link. The vendor-neutral term for this is link aggregation, or LAG for short.
The default behavior is to assign one of the physical links to each packet that traverses the EtherChannel, based on the packet’s destination MAC address. This means that if one workstation talks to one server over an EtherChannel, only one of the physical links will be used. In fact, all of the traffic destined for that server will traverse a single physical link in the EtherChannel. This means that a single user will only ever get 1 Gbps from the EtherChannel at a time. This behavior can be changed to send each packet over a different physical link, but as you’ll see, there are limits to how well this works for applications like VoIP. The benefit arises when there are multiple destinations, which can each use a different path.
You can change the method the switch uses to determine which path to assign. The default behavior is to use the destination MAC address. However, depending on the version of the software and hardware in use, the options may include:
The source MAC address
The destination MAC address
The source and destination MAC addresses
The source IP address
The destination IP address
The source and destination IP addresses
The source port
The destination port
The source and destination ports
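For illustration, here is a rough sketch of how a switch might map a frame to one of the physical links using the source and destination MAC addresses. The actual hash is platform-specific (Cisco gear typically XORs low-order bits of the chosen fields), but the principle is the same:

```python
# Map a frame to one link in the bundle by hashing the chosen fields.
# Same inputs always hash to the same link, which is why one
# workstation/server pair never uses more than one physical link.

def pick_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    s = int(src_mac.replace(":", ""), 16)
    d = int(dst_mac.replace(":", ""), 16)
    return (s ^ d) % num_links

print(pick_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4))
print(pick_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4))  # same link
```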
There is another terminology problem that can create many headaches for network administrators. While a group of physical Ethernet links bonded together is called an EtherChannel in Cisco parlance, Unix admins sometimes refer to the same configuration as a trunk. Of course, in the Cisco world the term “trunk” refers to something completely different: a link that labels frames with VLAN information so that multiple VLANs can traverse it. Some modern Unixes create a bond interface when performing link aggregation, and Windows admins often use the term teaming when combining links.
EtherChannel can negotiate with the device on the other side of the link. Two protocols are supported on Cisco devices. The first is the Link Aggregation Control Protocol (LACP), which is defined in IEEE specification 802.3ad. LACP is used when you’re connecting to non-Cisco devices, such as servers. The other protocol used in negotiating EtherChannel links is the Port Aggregation Control Protocol (PAgP). Since PAgP is Cisco-proprietary, it is used only when you’re connecting two Cisco devices via an EtherChannel. Each protocol supports two modes: a passive mode (auto in PAgP and passive in LACP), and an active mode (desirable in PAgP and active in LACP). Alternatively, you can set the mode to on, thus forcing the creation of the EtherChannel.
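Which mode pairings actually bring up a channel can be summarized in a small sketch, using LACP's terms (PAgP's auto/desirable behave analogously). The rule of thumb: at least one side must actively initiate, and "on" bypasses negotiation entirely, so it only works against another "on":

```python
# Sketch of LACP mode compatibility: a channel forms if at least one
# side is active, or both sides are forced on. "on" does not send
# negotiation frames, so on+active and on+passive both fail.

def channel_forms(side_a: str, side_b: str) -> bool:
    if side_a == "on" or side_b == "on":
        return side_a == "on" and side_b == "on"
    return "active" in (side_a, side_b)   # passive+passive never forms

print(channel_forms("active", "passive"))  # True
print(channel_forms("passive", "passive")) # False
print(channel_forms("on", "active"))       # False
```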
The Spanning Tree Protocol (STP) is used to ensure that no Layer-2 loops exist in a LAN. Spanning tree is designed to prevent loops among bridges. A bridge is a device that connects multiple segments within a single broadcast domain. Switches are considered bridges—hubs are not.
When a switch receives a broadcast, it repeats the broadcast on every port (except the one on which it was received). In a looped environment, the broadcasts are repeated forever. The result is called a broadcast storm, and it will quickly bring a network to a halt. Spanning tree is an automated mechanism used to discover and break loops of this kind.
A useful tool when you’re troubleshooting a broadcast storm is the show processes cpu history command.
Here is the output from the show processes cpu history command on switch B, which shows 0–3 percent CPU utilization over the course of the last minute:
The numbers on the left side of the graph are the CPU utilization percentages. The numbers on the bottom are seconds in the past (0 = the time of command execution). The numbers on the top of the graph show the integer values of CPU utilization for that time period on the graph. For example, according to the preceding graph, CPU utilization was normally 0 percent, but increased to 1 percent 5 seconds ago and to 3 percent 20 seconds ago. When the values exceed 10 percent, you’ll see visual peaks in the graph itself.
Another problem caused by a looped environment is that MAC address tables (CAM tables in CatOS) are constantly updated:
3550-IOS#sho mac-address-table | include 0030.1904.da60
Spanning tree elects a root bridge (switch) in the network. The root bridge is the bridge that all other bridges need to reach via the shortest path possible. Spanning tree calculates the cost for each path from each bridge in the network to the root bridge. The path with the lowest cost is kept intact, while all others are broken. Spanning tree breaks paths by putting ports into a blocking state.
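The core per-switch decision can be sketched as follows. This is a deliberate simplification: a real switch keeps one lowest-cost root port, and its other ports may still forward as designated ports rather than all being blocked. The costs shown are the classic IEEE values (1 Gbps = 4, 100 Mbps = 19):

```python
# Simplified sketch of one switch's spanning tree decision: of all
# ports with a path to the root bridge, keep the lowest-cost one as
# the root port. (In a real topology, non-root ports may still
# forward as designated ports; this sketch just blocks them.)

def choose_root_port(paths):
    """paths: {port_name: cost_to_root}. Returns (root_port, blocked)."""
    root_port = min(paths, key=paths.get)
    blocked = sorted(p for p in paths if p != root_port)
    return root_port, blocked

paths = {"Gi0/1": 4, "Fa0/1": 19, "Fa0/2": 38}
print(choose_root_port(paths))  # ('Gi0/1', ['Fa0/1', 'Fa0/2'])
```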
Routing and Routers
In a Cisco router, the routing table is called the Routing Information Base (RIB).
Administrative distance is a value assigned to every routing protocol. In the event of two protocols reporting the same route, the routing protocol with the lowest administrative distance will win, and its version of the route will be inserted into the RIB.
You can see that RIP has an administrative distance of 120, while OSPF has an administrative distance of 110. This means that even though the RIP route has a better metric in Figure 10-7, the route inserted into the routing table will be the one provided by OSPF.
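The selection rule can be sketched in a few lines. The administrative distances are standard Cisco defaults; the candidate routes are illustrative:

```python
# Sketch of RIB insertion: when two protocols offer the same prefix,
# the protocol with the lower administrative distance wins,
# regardless of metric. Metrics only break ties within one protocol.

AD = {"connected": 0, "static": 1, "eigrp": 90, "ospf": 110, "rip": 120}

def best_route(candidates):
    """candidates: list of (protocol, metric) tuples for one prefix."""
    return min(candidates, key=lambda c: (AD[c[0]], c[1]))

# RIP's hop count of 1 looks "better" than OSPF's cost of 200, but
# OSPF's lower AD (110 < 120) puts its route in the table.
print(best_route([("rip", 1), ("ospf", 200)]))  # ('ospf', 200)
```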
A tunnel is a means whereby a local device can communicate with a remote device as if the remote device were local as well. There are many types of tunnels. Virtual Private Networks (VPNs) are tunnels. Generic Routing Encapsulation (GRE) creates tunnels. Secure Shell (SSH) is also a form of tunnel, though different from the other two.
Tunnels can encrypt data so that only the other side can see it, as with SSH; or they can make a remote network appear local, as with GRE; or they can do both, as is the case with VPN.
GRE tunnels allow remote networks to appear to be locally connected. GRE offers no encryption, but it does forward broadcasts and multicasts. If you want a routing protocol to establish a neighbor adjacency or exchange routes through a tunnel, you’ll probably need to configure GRE. GRE tunnels are often built within VPN tunnels to take advantage of encryption. GRE is described in RFCs 1701 and 2784.
VPN tunnels also allow remote networks to appear as if they were locally connected. VPN encrypts all information before sending it across the network, but it will not usually forward multicasts and broadcasts. Consequently, GRE tunnels are often built within VPNs to allow routing protocols to function. VPNs are often used for remote access to secure networks.
There are two main types of VPNs: point-to-point and remote access. Point-to-point VPNs offer connectivity between two remote routers, creating a virtual link between them. Remote access VPNs are single-user tunnels between a user and a router, firewall, or VPN concentrator (a specialized VPN-only device).
Remote access VPNs usually require VPN client software to be installed on a personal computer. The client communicates with the VPN device to establish a personal virtual link.
SSH is a client/server application that allows secure connectivity to servers. In practice, it is usually used just like Telnet. The advantage of SSH over Telnet is that it encrypts all data before sending it. While not originally designed to be a tunnel in the sense that VPN or GRE would be considered a tunnel, SSH can be used to access remote devices in addition to the one to which you have connected. While this does not have a direct application on Cisco routers, the concept is similar to that of VPN and GRE tunnels, and thus worth mentioning. I use SSH to access my home network instead of a VPN.
Switches, in the traditional sense, operate at Layer 2 of the OSI stack. The first multilayer switches were called Layer-3 switches because they added the capability to route between VLANs. These days, switches can do just about anything a router can do, including protocol testing and manipulation all the way up to Layer 7. Thus, we now refer to switches that operate above Layer 2 as multilayer switches.
The core benefit of the multilayer switch is the capability to route between VLANs, which is made possible through the addition of virtual interfaces within the switch. These virtual interfaces are tied to VLANs, and are called switched virtual interfaces (SVIs).
Another option for some switches is to change a switch port into a router port—that is, to make a port directly addressable with a Layer-3 protocol such as IP. To do this, you must have a switch that is natively IOS or running in IOS native mode. Nexus 7000 switch ports default to router mode.
The primary benefits of Frame Relay are cost and flexibility. A point-to-point T1 will cost more than a Frame Relay T1 link between two sites, especially if the sites are not in the same geographic location, or LATA. Also, with a point-to-point T1, 1.5 Mbps of dedicated bandwidth must be allocated between each point, regardless of the utilization of that bandwidth. In contrast, a Frame Relay link shares resources; on a Frame Relay link, if bandwidth is not being used in the cloud, other customers can use it.
In this network, the firewall should be configured as follows:
The inside network can initiate connections to any other network, but no other network can initiate connections to it.
The outside network cannot initiate connections to the inside network. The outside network can initiate connections to the DMZ.
The DMZ can initiate connections to the outside network, but not to the inside network. Any other network can initiate connections into the DMZ.
One of the main benefits of this type of design is isolation. Should the email server come under attack and become compromised, the attacker will not have access to the users on the inside network. However, in this design, the attacker will have access to the other servers in the DMZ because they’re on the same physical network. (The servers can be further isolated with Cisco Ethernet switch features such as private VLANs, port ACLs, and VLAN maps.)
Another common DMZ implementation involves connectivity to a third party, such as a vendor or supplier. The following figure shows a simple network where a vendor is connected by a T1 to a router in the DMZ. Examples of vendors might include a credit card processing service or a supplier that allows your users to access its database. Some companies even outsource their email system to a third party, in which case the vendor’s email server may be accessed through such a design.
QoS #Quality of service
Some protocols require that packets arrive in order. Other protocols may be sensitive to packet loss. Let’s take a look at some of these protocols and see how they differ:
TCP includes algorithms that alert the sending station of lost or damaged packets so they can be resent. Because of this, TCP-based applications are generally not sensitive to lost packets; in addition, they tend to be less time-sensitive than UDP-based applications.
UDP does not do any error checking and does not report on lost packets. Because of this, UDP-based applications may be sensitive to packet loss.
HTTP is TCP-based. Generally, HTTP applications are not time-sensitive. When you’re viewing a web page, having to wait longer for an image to load due to a dropped packet is not usually a problem.
FTP is TCP-based. FTP is not a real-time protocol, nor is it time-sensitive. If packets are dropped while you’re downloading a file, it’s usually not a problem to wait the extra time for the packets to be resent.
Telnet and SSH
Telnet and SSH are both TCP-based. While they may appear to be real time, they’re not. When packets are resent, the problem manifests as slow responses while you’re typing. This may be annoying, but no damage is done when packets are dropped and resent.
VoIP is UDP-based for the Real-time Transport Protocol (RTP) voice stream and TCP-based for the call-control stream. VoIP requires extreme reliability and speed, and cannot tolerate packets being delivered out of order. The use of UDP may seem odd, since UDP is not generally used for reliable packet delivery. VoIP uses UDP to avoid the processing and bandwidth overheads involved in TCP. The speed gained from using UDP is significant. Reliability issues can be resolved with QoS; in fact, VoIP is one of the main reasons that companies deploy QoS.
QoS can also be used for some more interesting applications. For example, you can configure your network so that Telnet and SSH have priority over all other traffic. When a virus hits and you need to telnet to your routers, the Telnet and/or SSH traffic will always get through (assuming you’ve rate-limited the CPUs on said routers). Or, if you’re eager to please the boss, you can prioritize his traffic above everyone else’s (except, of course, your own) so he’ll have a better online experience. While these examples may seem far-fetched, I’ve been asked to do just these sorts of things for customers. In many cases, the Operations department needs better network access than the rest of the company. And once an executive learns about QoS, he may demand that he get “better” treatment on the network. I wouldn’t recommend this, but I’ve seen it happen.
Every IP packet has a field in it called the type of service (TOS) field. Two primary types of IP prioritization are used at Layer 3: IP precedence and differentiated services (DiffServ).
Knowing that a value of 160 in the TOS field equals an IP precedence of 5 can be valuable when you’re looking at packet captures. Because the field is known only as TOS to IP, the packet-capture tool will usually report the TOS value.
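The arithmetic is simple once you know that IP precedence occupies the top three bits of the TOS byte. A quick sketch:

```python
# Why TOS 160 means IP precedence 5: precedence is the top three bits
# of the TOS byte, so the byte value is just precedence shifted left
# by five bits.

def precedence_to_tos(precedence: int) -> int:
    return precedence << 5        # 5 -> 0b10100000 -> 160

def tos_to_precedence(tos: int) -> int:
    return tos >> 5

print(precedence_to_tos(5))   # 160
print(tos_to_precedence(160)) # 5
# Read as a DSCP value (the top six bits), the same byte gives 40 (CS5).
print(160 >> 2)               # 40
```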
The Congested Network
How do you know if your network is congested? Let’s look at our favorite two-building company again:
The answer is a deceptively simple one: we’re looking at the wrong side of the link. Take another look at 4 and 5:
5 minute input rate 1509000 bits/sec, 258 packets/sec
5 minute output rate 259000 bits/sec, 241 packets/sec
The input is almost maxed, not the output! When we drop packets due to congestion, we drop them on the outbound journey. We’re looking at the bottom of the hose here. The only packets that can come out of it are the ones that made it through the funnel! We’ll never see the ones that were dropped on the other side from this point of view.
So, let’s take a look at the other side of the link with the same show interface command:

Bldg-A-Rtr#sho int s0/0
Serial0/0 is up, line protocol is up
  Hardware is PowerQUICC Serial
  Description: [-- T1 WAN Link --]
  Internet address is 10.10.10.1/30
  MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec,
     reliability 255/255, txload 250/255, rxload 47/255
  Encapsulation PPP, loopback not set
  Keepalive set (10 sec)
  LCP Open
  Open: IPCP, CDPCP
  Last input 00:00:00, output 00:00:00, output hang never
  Last clearing of "show interface" counters 3w4d
  Input queue: 0/75/0 (size/max/drops); Total output drops: 152195125
  Queueing strategy: weighted fair
  Output queue: 63/1000/64/152195113 (size/max total/threshold/drops)
     Conversations 7/223/256 (active/max active/max total)
     Reserved Conversations 0/0 (allocated/max allocated)
  5 minute input rate 261000 bits/sec, 238 packets/sec
  5 minute output rate 1511000 bits/sec, 249 packets/sec
     282883104 packets input, 3613739796 bytes, 0 no buffer
     Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
     872 input errors, 472 CRC, 357 frame, 0 overrun, 0 ignored, 12 abort
     1326931662 packets output, 2869922208 bytes, 0 underruns
     0 output errors, 0 collisions, 1 interface resets
     0 output buffer failures, 0 output buffers swapped out
     0 carrier transitions
     DCD=up  DSR=up  DTR=up  RTS=up  CTS=up
A quick look at the load stats shows us that we’re looking at the right side now:
reliability 255/255, txload 250/255, rxload 47/255
Now, let’s look at the queues:
Input queue: 0/75/0 (size/max/drops); Total output drops: 152195125
Queueing strategy: weighted fair
Output queue: 63/1000/64/152195113 (size/max total/threshold/drops)
Conversations 7/223/256 (active/max active/max total)
Reserved Conversations 0/0 (allocated/max allocated)
The first thing that pops out at me is that this end of the link is running with WFQ enabled (as it should be). Remember that the other side was FIFO. They don’t need to match, but there’s probably no reason for the other side to be FIFO.
And wow! 152,195,113 packets dropped on the output queue! If we divide that into the total number of packets (152,195,113 / 1,326,931,662 * 100), we get 11.47 percent of all packets dropped. I’d say we’ve found our problem. This link is so saturated that more than 1 out of every 10 packets is being discarded. It’s no wonder users are complaining!
A total of 11.47 percent might not seem like a lot of dropped packets, but remember that VoIP cannot stand to have packets dropped. Even 1 percent would be a problem for VoIP. And for protocols that can recover lost packets, the result is a perceived slowdown. If you’re dropping even 1 percent of your packets over a serial link, you’ve got a congestion problem.
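The arithmetic above, expressed as a reusable check against the rule of thumb that even 1 percent output drops on a serial link indicates congestion:

```python
# Drop percentage = output queue drops / total packets output * 100,
# using the counters from the show interface output above.

def drop_pct(output_drops, packets_output):
    return output_drops / packets_output * 100

pct = drop_pct(152_195_113, 1_326_931_662)
print(f"{pct:.2f}% dropped")  # 11.47% dropped
print("congestion problem" if pct > 1 else "ok")
```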
Now that we know what the problem is, we need to monitor the link over time. We want to ensure that all those packets weren’t dropped in the last 10 minutes (though that’s unlikely given the high numbers).
To clear those counters:
Bldg-A-Rtr#clear counters s0/0
Clear “show interface” counters on this interface [confirm]
Q: What’s the difference between bandwidth and speed?
A: Bandwidth is a capacity; speed is a rate. Bandwidth tells you the maximum amount of data that your network can transmit. Speed tells you the rate at which the data actually travels. A CAT-5 cable has the bandwidth to carry 10 or 100 Mbps Ethernet (10Base-T or 100Base-TX); the speed of the data traveling over the cable changes depending on conditions.
Q: What is Base-T?
A: Base-T refers to the family of Ethernet standards that run over twisted-pair cabling. The 10Base-T standard transfers data at 10 megabits per second (Mbps). The 100Base-T standard transfers data at 100 Mbps. The 1000Base-T standard transfers data at a massive 1000 Mbps.
Q: What is a crossover cable used for?
A: Suppose you want to connect a laptop to a desktop computer. One way of doing this is to use a switch or a hub to connect the two devices, and another way is to use a crossover cable, a cable in which the transmit wires on one end are crossed over to the receive wires on the other end. This lets two like devices talk to each other directly. A crossover cable is different from a straight-through cable, which connects each pin to the same pin on the other end and is used to connect unlike devices, such as a computer to a switch.
Q: Aren’t packets and frames really the same thing?
A: No. We call data transmitted over Ethernet frames. Inside those frames, in the data field, are packets. Generally, frames have to do with the transmission protocol, i.e., Ethernet, ATM, Token Ring, etc. But, as you read more about networking, you will see that there is some confusion on this.
Q: A guy in my office calls packets datagrams. Are they the same?
A: Not really. Packet is the general term for any chunk of data sent across a network. Datagram refers specifically to packets sent by an unreliable (connectionless) protocol such as UDP or ICMP.
Q: What’s the difference between megabits per second (Mbps) and megabytes per second (MBps)?
A: Megabits per second (Mbps) is a bandwidth rate used in the telecommunications and computer networking field. One megabit equals one million bits. Megabytes per second (MBps) is a data transfer rate used in computing. One megabyte equals 1,048,576 bytes, and one byte equals 8 binary digits (aka bits).
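A quick worked example of the distinction, converting a link rate in megabits per second to an effective transfer rate in megabytes per second (ignoring protocol overhead):

```python
# Telecom "mega" is decimal (1,000,000 bits); computing "mega" for
# bytes is binary (1,048,576 bytes). Divide by 8 to go from bits
# to bytes.

BITS_PER_BYTE = 8
BYTES_PER_MEGABYTE = 1_048_576

def mbps_to_mbytes_per_sec(mbps: float) -> float:
    bits_per_sec = mbps * 1_000_000
    return bits_per_sec / BITS_PER_BYTE / BYTES_PER_MEGABYTE

# A "100 Mbps" link moves at most about 11.9 MBps of data.
print(round(mbps_to_mbytes_per_sec(100), 1))  # 11.9
```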
The order of the wires in an RJ-45 connector conforms to one of two standards. These standards are 568A and 568B.
We can convert to ASCII using hex
Once you learn to use hexadecimal, you realize just how cool it is. Hex and binary make great partners, which simplifies conversions between binary and ASCII. Hex is like a bridge between the weird world of binary and our world (the human, readable world).
Here’s what we do:
- Break the byte in half.
Each half-byte is called a nibble. [Note from Editor: you’re kidding, right?]
- Convert each half into its hexadecimal equivalent.
Because the binary number is broken into halves, the highest number you can get is 15 (which is “F” in hex).
- Concatenate the two numbers.
Concatenate is a programmer’s word that simply means “put them beside each other from left to right.”
- Look the number up in an ASCII table. (You can run man ascii to see the full ASCII hex character table.)
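The steps above translate almost directly into code. Here's a small sketch in Python, working one byte at a time:

```python
def byte_to_ascii(byte_value):
    """Follow the nibble recipe: break in half, convert, concatenate, look up."""
    high_nibble = byte_value >> 4    # top half of the byte (0-15)
    low_nibble = byte_value & 0x0F   # bottom half of the byte (0-15)
    hex_string = "{:X}{:X}".format(high_nibble, low_nibble)  # concatenate
    return hex_string, chr(byte_value)  # chr() is the ASCII-table lookup

# 0b01001000 -> nibbles 4 and 8 -> "48" in hex -> 'H' in the ASCII table
print(byte_to_ascii(0b01001000))  # ('48', 'H')
```

Because each nibble is at most 15, each one maps to exactly one hex digit, which is why hex makes such a tidy bridge between binary and ASCII.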
Hubs – Switches – Routers
A hub receives incoming signals and sends them out on all the other ports. When several devices start sending signals, the hub’s incessant repetition creates heavy traffic and collisions. A collision happens when two signals run into one another, creating an error. The sending network device has to back off and wait to send the signal again.
A hub contains no processors, and this means that a hub has no real understanding of network data. It doesn’t understand MAC addresses or frames. It sees an incoming networking signal as a purely electrical signal, and passes it on.
A hub is really just an electrical repeater. It takes whatever signal comes in, and sends it out on all the other ports.
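A hub's behavior is so simple it fits in one line of code: whatever arrives on one port goes out every other port. A toy sketch:

```python
class Hub:
    """A hub blindly repeats the incoming signal on all other ports."""

    def __init__(self, num_ports):
        self.num_ports = num_ports

    def repeat(self, in_port, signal):
        # No MAC addresses, no frames -- just flood everything
        return {port: signal
                for port in range(self.num_ports) if port != in_port}

hub = Hub(4)
print(hub.repeat(0, "electrical signal"))  # ports 1, 2, 3 all get a copy
```

Every device on a hub sees every signal, which is exactly why collisions pile up as traffic grows.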
- The source workstation sends a frame.
A frame carries the payload of data and keeps track of the time sent, as well as the MAC address of the source and the MAC address of the target.
- The switch updates its MAC address table with the MAC address and the port it’s on.
Switches maintain MAC address tables. As frames come in, the switch's picture of the traffic gets more complete. The switch matches ports with MAC addresses.
- The switch forwards the frame to its target MAC address using information from its table.
It does this by sending the frame out the port where that MAC address is located as the MAC address table indicates.
Switches avoid collisions by storing and forwarding frames on the intranet. Switches are able to do this by using the MAC address of the frame. Instead of repeating the signal on all ports, a switch sends the frame only to the port where the destination device is.
A switch reads the signal as a frame and uses the frame’s information to send it where it’s supposed to go.
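The learn-then-forward loop above can be sketched in a few lines. The port numbers and MAC address strings here are made up for illustration:

```python
class Switch:
    """Learns source MACs per port; forwards by table, floods when unknown."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port       # learn where the sender lives
        if dst_mac in self.mac_table:           # forward using the table
            return [self.mac_table[dst_mac]]
        # Unknown destination: fall back to flooding, just like a hub
        return [p for p in range(self.num_ports) if p != in_port]

sw = Switch(4)
sw.receive(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb")  # bb unknown: floods
print(sw.receive(1, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))  # [0]
```

Note that a brand-new switch floods like a hub at first; it gets smarter with every frame it sees.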
How the router moves data across networks
- The sending device sends an ARP request for the MAC address of its default gateway.
- The router responds with its MAC address.
- The sending device sends its traffic to the router.
- The router sends an ARP request for the device with the matching IP address on the destination IP network.
- The receiving device responds with its MAC address.
- The router changes the MAC address in the frame and sends the data to the receiving device.
- The source workstation sends a frame to the router.
It sends it to the router since the workstation the traffic is meant for is behind the router.
- The router changes the source MAC address to its MAC address and changes the destination MAC address to the workstation the traffic is meant for.
If network traffic comes from a router, we can only see the router's MAC address. All the workstations behind that router make up what we call an IP subnet. All a switch needs to look at to get frames to their destination is the MAC address. A router looks at the IP address of the incoming packet and forwards it if it is intended for a workstation located on the other network. Routers have far fewer network ports because they tend to connect to other routers or to switches. Computers are generally not connected directly to a router.
The switch decides where to send traffic based on the MAC address, whereas the router decides based on the IP address.
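The key move in the walkthrough above is the MAC rewrite: the IP addresses inside the packet stay the same end to end, while the frame's MAC addresses change at every hop. A small sketch (all the addresses are invented for illustration):

```python
def router_forward(frame, router_mac, next_hop_mac):
    """Rewrite the frame's MAC addresses; leave the IP packet inside untouched."""
    forwarded = dict(frame)                # copy so the original is unchanged
    forwarded["src_mac"] = router_mac      # the router becomes the new source
    forwarded["dst_mac"] = next_hop_mac    # learned via ARP on the far network
    return forwarded

frame = {"src_mac": "aa:aa:aa:aa:aa:aa", "dst_mac": "rr:rr:rr:rr:rr:rr",
         "src_ip": "10.0.0.5", "dst_ip": "192.168.1.7"}
out = router_forward(frame, "rr:rr:rr:rr:rr:rr", "cc:cc:cc:cc:cc:cc")
print(out["src_ip"], out["dst_ip"])  # unchanged: 10.0.0.5 192.168.1.7
```

This is also why the receiving workstation only ever sees the router's MAC address as the frame's source, no matter which machine on the far network actually sent the packet.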
Q: But I have a DSL router at home, and my computer is directly connected to it. What is that all about?
A: Good observation. There are switches that have routing capability and routers that have switched ports. There is not a real clear line between the two devices. It is more about their primary function. Now, in large networks, there are switching routers. These have software that allow them to work as routers on switched ports. They are great to use and make building large sophisticated networks straightforward, but they are very expensive.
Q: So the difference between my home DSL router and an enterprise switching router is the software?
A: The big difference is the hardware horsepower. Your home DSL router probably uses a small embedded processor or microcontroller which does all the processing. Switching routers and heavy-duty routers have specialized hardware, with individual processors on each port. The name of the game is the speed at which it can move packets. Your home DSL router probably has a throughput of about 20 Mbps (megabits per second), whereas a high-end switching router can have a throughput of hundreds of Gbps (gigabits per second) or more.
Hook Wireshark up to the switch
- Connect your computer to the switch with a serial cable.
You will use this to communicate with the switch.
- Open a terminal program such as HyperTerminal, get to the command prompt of the switch, and enter the commands to mirror traffic to the port you will capture on.
- Hook up your computer to port 1 on the switch with an Ethernet cable.
You will use this to capture network traffic.
- Start up Wireshark and capture some network traffic.
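The exact port-mirroring commands depend on the switch. As one example, on a Cisco IOS switch the feature is called SPAN; mirroring the traffic from port 2 to the capture machine on port 1 looks roughly like this (the interface names are assumptions — substitute the ports on your own hardware):

```
Switch# configure terminal
Switch(config)# monitor session 1 source interface fastEthernet 0/2
Switch(config)# monitor session 1 destination interface fastEthernet 0/1
Switch(config)# end
```

Without a mirroring setup like this, a switch will only deliver your computer its own traffic plus broadcasts, so Wireshark would see very little.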
- Some contents of this article are from the books Head First Networking and Network Warrior, 2nd Edition.
- Here’s more about MTU http://www.doxer.org/checking-mtu-or-jumbo-frame-settings-with-ping/