IP networks were not originally designed to carry real-time traffic; instead, they were
designed for resiliency and fault tolerance. Each packet is processed separately in an IP
network, sometimes causing different packets in a communications stream to take different
paths to the destination. The different paths in the network may have a different amount of
packet loss, delay, and delay variation (jitter) because of bandwidth, distance, and congestion
differences. The destination must be able to receive packets out of order and resequence
these packets. This challenge is solved by the use of Real-Time Transport Protocol (RTP)
sequence numbers and traffic resequencing. When possible, however, it is best not to rely
solely on these RTP mechanisms. In a properly designed network, Cisco Express Forwarding
(CEF), the switching cache technology on Cisco routers, performs per-destination load
sharing by default. Per-destination load sharing is not a perfect load-balancing scheme, but
it ensures that all packets of a given IP flow (such as a voice call) take the same path.
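The idea behind per-destination load sharing can be sketched as a hash over the flow's addresses: every packet of the same source/destination pair maps to the same outgoing path. The snippet below is a hypothetical illustration of the concept, not Cisco's actual CEF hash algorithm; the path names and addresses are made up.

```python
# Hypothetical sketch of per-destination load sharing: hash the
# (source, destination) pair to pick one of the equal-cost paths, so
# every packet of a given flow (e.g. one voice call) follows the same
# path. This is an illustration, not Cisco's actual hash algorithm.
import hashlib

PATHS = ["link-A", "link-B"]  # two equal-cost paths to the destination

def pick_path(src_ip: str, dst_ip: str) -> str:
    """Deterministically map a src/dst address pair onto one path."""
    key = f"{src_ip}->{dst_ip}".encode()
    digest = hashlib.md5(key).digest()
    return PATHS[digest[0] % len(PATHS)]

# Every packet of the same call hashes to the same path, so the voice
# stream cannot be reordered by taking divergent routes.
assert pick_path("10.1.1.10", "10.2.2.20") == pick_path("10.1.1.10", "10.2.2.20")
```

Because the selection is a pure function of the address pair, the path choice is stable for the life of the flow, which is exactly the property that keeps a voice call's packets in order.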
Bandwidth is shared by multiple users and applications, and the amount of bandwidth
required by an individual IP flow can vary significantly over short periods of time. Most data
applications are very bursty, whereas real-time audio carried in RTP is a constant,
continuous-bandwidth stream. The bandwidth available to any given application, including
Cisco Unified Communications Manager (CUCM) signaling and voice-bearer traffic, is
therefore unpredictable. During peak periods, network congestion forces packets to be
buffered in queues while they wait to be processed.
Queuing is a concept familiar to anyone who has traveled by air. When you arrive at the
airport, you must get in a line (queue), because the number of ticket agents (bandwidth)
available to check you in is less than the flow of travelers arriving at the ticket counters
(incoming IP traffic). If congestion lasts too long, the queue (packet buffers) fills up, and
passengers are turned away (packets are dropped). Higher queuing delays
and packet drops are more likely on highly loaded, slow-speed links such as WAN links
used between sites in a multisite environment. Quality challenges are common on these
types of links, and you need to handle them by implementing QoS. Without the use of QoS,
voice packets experience delay, jitter, and packet loss, impacting voice quality. It is critical
to properly configure Cisco QoS mechanisms end to end throughout the network for proper
audio and video performance.
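Some quick arithmetic shows why slow WAN links are where voice quality suffers. A minimal sketch follows, using illustrative numbers that are not taken from the text (a 768 kbps link and 1500-byte data packets are common but assumed values):

```python
# Back-of-the-envelope queuing-delay math for a slow WAN link.
# The link speed and packet size are assumed example values.
LINK_BPS = 768_000   # 768 kbps WAN link
MTU_BYTES = 1500     # one full-size data packet

# Time to clock one full-size packet onto the wire (~15.6 ms).
serialization_ms = MTU_BYTES * 8 / LINK_BPS * 1000

def queuing_delay_ms(packets_ahead: int) -> float:
    """Delay a voice packet sees behind N full-size data packets."""
    return packets_ahead * serialization_ms

# With just four bulk-data packets queued ahead, a voice packet waits
# about 62 ms before it even starts transmitting.
print(round(queuing_delay_ms(4), 1))  # -> 62.5
```

Even a short burst of data traffic ahead of a voice packet can consume a large share of the roughly 150 ms one-way delay budget commonly cited for good voice quality, which is why QoS must protect voice on these links.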
During peak periods, packets cannot be sent immediately because of interface congestion.
Instead, the packets are temporarily stored in a queue, waiting to be processed. The amount
of time the packet waits in the queue, called the queuing delay, can vary greatly based on
network conditions and traffic arrival rates. If the queue is full, newly arriving packets can
no longer be buffered and are dropped (tail drop). Figure 1-1 illustrates tail drop.
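The tail-drop behavior described above can be sketched as a bounded FIFO buffer: arrivals are queued until the buffer is full, and anything arriving after that is discarded. This is a minimal illustration, with an assumed queue depth and packet names.

```python
# Minimal sketch of a FIFO output queue with tail drop: once the
# buffer is full, newly arriving packets are simply discarded.
from collections import deque

class FifoQueue:
    def __init__(self, max_depth: int):
        self.buf = deque()
        self.max_depth = max_depth
        self.drops = 0

    def enqueue(self, pkt) -> bool:
        if len(self.buf) >= self.max_depth:
            self.drops += 1        # tail drop: no room, packet is lost
            return False
        self.buf.append(pkt)
        return True

    def dequeue(self):
        # FIFO: the packet that arrived first is transmitted first
        return self.buf.popleft() if self.buf else None

q = FifoQueue(max_depth=3)
for n in range(5):                 # burst of 5 packets into a 3-deep queue
    q.enqueue(f"pkt-{n}")
print(q.drops)  # -> 2
```

Note that tail drop is indiscriminate: the two lost packets could just as easily be voice as bulk data, which is precisely the problem QoS queuing disciplines are designed to solve.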
Packets are processed on a first-in, first-out (FIFO) basis in the hardware queue of all router
interfaces. Voice conversations are predictable and constant (a sample is sent every
20 milliseconds by default), but data applications are bursty and greedy. Voice is therefore
subject to quality degradation caused by delay, jitter, and packet loss.
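The predictability of voice follows directly from the packetization interval: with a packet every 20 ms, a call emits exactly 50 equal-size packets per second. The sketch below works this out; the G.711 payload size and header overheads are common reference figures assumed for illustration, since the text only states the 20 ms default interval.

```python
# Why voice traffic is predictable: a fixed packetization interval
# yields a fixed packet rate and a constant bit rate.
# G.711 payload and IP/UDP/RTP header sizes are assumed example values.
PACKET_INTERVAL_MS = 20
PAYLOAD_BYTES = 160          # 64 kbps G.711 audio * 20 ms
IP_UDP_RTP_OVERHEAD = 40     # 20 (IP) + 8 (UDP) + 12 (RTP) bytes

packets_per_second = 1000 // PACKET_INTERVAL_MS
bps = packets_per_second * (PAYLOAD_BYTES + IP_UDP_RTP_OVERHEAD) * 8

print(packets_per_second, bps)  # -> 50 80000
```

A steady 50 pps, 80 kbps stream is easy for a QoS policy to provision for; the hard part is protecting it from the bursty, greedy data flows sharing the same queue.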
Figure 1-1 Tail Drop ("IP" refers to any type of Internet Protocol (IP) packet in the output
queue for an interface.)