Understanding Application Performance on the Network | Part 3

TCP Slow-Start

In Part II, we discussed performance constraints caused by both bandwidth and congestion. Purposely omitted was a discussion of packet loss, which is often an inevitable result of heavy network congestion. I'll use this blog entry on TCP slow-start to introduce the Congestion Window (CWD), which is fundamental to Part IV's in-depth review of packet loss.

TCP Slow-Start
TCP uses a slow-start algorithm as it tries to understand the characteristics (bandwidth, latency, congestion) of the path supporting a new TCP connection. In most cases, TCP has no inherent understanding of the network path; it could be a switched connection on a high-speed LAN to a server in the next room, or it could be a low-bandwidth, already congested connection to a server halfway around the globe. In an effort to be a good network citizen, TCP uses a slow-start algorithm based on an internally maintained congestion window (CWD), which identifies how many packets may be transmitted without being acknowledged; as the data carried in transmitted packets is acknowledged, the window increases. The CWD typically begins at two packets, permitting an initial transmission of just two packets before ramping up quickly as acknowledgements are received.

At the beginning of a new TCP connection, the CWD starts at two packets and increases as acknowledgements are received.
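To make the ramp-up concrete, here is a minimal sketch (illustrative only, not from the original article) of how the CWD might grow round trip by round trip, assuming the classic behavior of doubling once per round trip, a 1460-byte MSS, and a 65,535-byte receive window:

    # Minimal sketch (illustrative, not the author's code): CWD growth during
    # slow-start, assuming the window doubles once per round trip until it
    # reaches the receiver's advertised window.
    MSS = 1460              # assumed maximum segment size, in bytes
    RECEIVE_WINDOW = 65535  # assumed receiver's advertised window, in bytes

    cwd_packets = 2         # initial congestion window of two packets
    round_trips = 0
    while cwd_packets * MSS < RECEIVE_WINDOW:
        print(f"RTT {round_trips}: CWD = {cwd_packets} packets "
              f"({cwd_packets * MSS} bytes in flight)")
        cwd_packets *= 2    # roughly one extra packet allowed per ACK received
        round_trips += 1
    print(f"RTT {round_trips}: CWD reaches the receive window limit")

The exact growth rate in practice depends on details such as delayed acknowledgements, but the exponential shape is the point: the first few round trips carry very little data.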

The CWD will continue to increase until one of three conditions is met:

Condition                               Determined by                Blog discussion
Receiver's TCP Window limit             Receiver's TCP Window size   Part VII
Congestion detected (via packet loss)   Triple Duplicate ACK         Part IV
Maximum write block size                Application configuration    Part VIII
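These three limits combine as a simple minimum: at any moment the sender can have no more unacknowledged data in flight than the smallest of them allows. A minimal sketch, with hypothetical names and example numbers:

    def effective_send_window(cwd_bytes, receive_window_bytes, write_block_bytes):
        # Hypothetical helper: the data a sender can keep in flight is capped by
        # the smallest of the congestion window, the receiver's advertised
        # window, and the amount of data the application has actually written.
        return min(cwd_bytes, receive_window_bytes, write_block_bytes)

    # Example: a CWD of 16 packets x 1460 bytes, a 65,535-byte receive window,
    # and an application writing 8 KB blocks, so the write size is the limit.
    print(effective_send_window(16 * 1460, 65535, 8 * 1024))  # prints 8192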

Generally, TCP slow-start will not be a primary or significant bottleneck. Slow-start occurs only once per TCP connection, so operations that reuse an existing connection incur no slow-start penalty at all. However, we will address the theoretical case of a TCP slow-start bottleneck, examine some influencing factors, and then present a real-world case.

The Maximum Segment Size and the CWD
The Maximum Segment Size (MSS) identifies the maximum TCP payload that can be carried by a packet; this value is set as a TCP option as a new connection is established. Probably the most common MSS value is 1460, but smaller sizes may be used to allow for VPN headers or to support different link protocols. Beyond the additional protocol overhead introduced by a reduced MSS, there is also an impact on the CWD, since the algorithm uses packets as its flow control metric.
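To see the packet-count effect, here is a quick illustrative calculation (a hypothetical helper, not from the article) of how many segments a fixed payload requires at different MSS values:

    def packets_needed(payload_bytes, mss):
        # Hypothetical helper: number of TCP segments needed to carry a payload.
        return -(-payload_bytes // mss)  # ceiling division

    # A 64 KB response needs noticeably more packets as the MSS shrinks:
    for mss in (1460, 1380, 536):
        print(f"MSS {mss}: {packets_needed(64 * 1024, mss)} packets")

More packets for the same payload means the CWD, counted in packets, represents less data per round trip; the next paragraph quantifies the effect on elapsed time.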

We can consider the CWD's exchanges of data packets and subsequent ACKs as TCP turns, or TCP round trips; each exchange incurs the round-trip path delay. Therefore, one of the primary factors influencing the impact of TCP slow-start is network latency. A smaller MSS value will result in a larger number of packets, and additional TCP turns, as the sending node increases the CWD to reach its upper limit. With a small MSS (536 bytes) and a high path delay (200 msec), slow-start might introduce about 3 seconds of delay to an operation as the CWD increases to a receive window limit of 65 KB.
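A rough back-of-the-envelope estimator (my own sketch, not the author's formula) shows why the combination of a small MSS and high latency hurts. It assumes the CWD grows by a fixed factor each round trip; with delayed ACKs the factor is closer to 1.5 than 2, which stretches the ramp-up further:

    import math

    def slow_start_turns(mss, receive_window, initial_packets=2, growth_per_rtt=1.5):
        # Rough estimate: TCP turns needed for the CWD to grow from its initial
        # size to the receive window limit, assuming a fixed growth factor per RTT.
        initial_bytes = initial_packets * mss
        if initial_bytes >= receive_window:
            return 0
        return math.ceil(math.log(receive_window / initial_bytes, growth_per_rtt))

    RTT = 0.200  # assumed 200 msec path delay
    for mss in (1460, 536):
        turns = slow_start_turns(mss, 65535)
        print(f"MSS {mss}: about {turns} turns, roughly {turns * RTT:.1f} s of ramp-up")

Under these simplified assumptions, the 536-byte case needs roughly a dozen turns, which at 200 msec per turn adds a couple of seconds of pure ramp-up; the exact figure depends on details such as ACK frequency and the initial window size.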

How Important Is TCP Slow-Start?
While a 3-second delay sounds significant, it is probably not a concern for large file transfers, or for applications that reuse TCP connections. But consider a simple web page with 20 page elements, averaging about 120 KB in size, served through a misconfigured proxy that prevents persistent TCP connections: loading the page requires 20 new TCP connections, and each one must ramp up through slow-start as content is downloaded. With a small MSS and/or high latency, each page component will experience a significant slow-start delay.
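A rough aggregate for that example, using purely hypothetical numbers (the per-connection ramp-up cost and the browser parallelism below are assumptions, not measurements):

    # Back-of-the-envelope sketch with hypothetical numbers:
    # 20 page elements, each fetched on a fresh TCP connection that must
    # repeat slow-start, with limited browser parallelism.
    elements = 20
    slow_start_cost = 0.5       # assumed per-connection ramp-up, in seconds
    parallel_connections = 4    # assumed concurrent connections per host

    waves = -(-elements // parallel_connections)  # ceiling division
    print(f"~{waves * slow_start_cost:.1f} s of added page-load time from slow-start alone")

Even with generous parallelism, repeating slow-start twenty times adds seconds to the page load; persistent (keep-alive) connections avoid almost all of it.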

For more network performance insight, read the full article.

More Stories By Gary Kaiser

Gary Kaiser is a Subject Matter Expert in Network Performance Analytics at Dynatrace, responsible for DC RUM’s technical marketing programs. He is a co-inventor of multiple performance analysis features, and continues to champion the value of network performance analytics. He is the author of Network Application Performance Analysis (WalrusInk, 2014).


