Application Performance Analytics | @DevOpsSummit @Dynatrace #DevOps #APM

How network and application metrics can be derived from a network probe, combined and analyzed to provide insights

Why a discussion around Application Performance Analytics? There's a lot of buzz in this industry around the topic of performance analytics - an informal subset of IT operations analytics (ITOA) - as a solution to the growing mountains of monitoring data and the increasing complexity of application and network architectures.

At the same time, there exist many purpose-built performance analysis solutions. Many are domain-centric - server monitoring and network monitoring, for example - while some exhibit a key ITOA characteristic by incorporating and correlating data from multiple sources. Most perform some level of analysis to expose predefined insights.

Application Performance Analytics: Viewed Through a Simple Framework
In this blog, I'll outline a simple analytics framework that illustrates how network and application metrics can be derived from a network probe ("wire data" to use the increasingly popular term), combined and analyzed to provide insights that are greater than the sum of the parts. (In fact, that is one of the core promises of ITOA.) I'll conclude by pointing out some of the more advanced analysis capabilities this framework might need to become a viable modern-day solution.

First, a bit of a disclaimer. My initial intent was to write about the new release of Dynatrace DC RUM; that was the assigned task. But I know if I started touting features - especially if I used ubiquitous (and usually meaningless) qualifiers such as "exciting, industry-leading, breakthrough, and best-of-breed," you'd probably stop reading. And rightly so; you don't come to this blog for product info, which is readily available here and here. (See what I did there?) Instead, I chose one of DC RUM's focus areas - advanced analytics - as an opportunity to wax technical; you can consider the framework a simplification and abstraction of one of the multiple approaches that DC RUM uses for automated fault domain isolation (FDI).

To keep this blog relatively simple, I'll use the example of a web page - although the framework would apply to any application that uses a request/reply paradigm; to be more universal, I'll switch terms slightly:

  • A transaction is the page load time that the user experiences
  • A hit is a component of the transaction - an image, stylesheet, JavaScript, JavaServer Page, etc. (both terms are modeled as simple data structures in the short sketch below)
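To anchor the sketches later in this post, here is one minimal way to model these two terms in code. The class and field names are purely illustrative assumptions on my part; they are not part of DC RUM or any particular product.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hit:
    """One request/reply exchange on the wire - an image, script, JSP, etc."""
    url: str
    request_start: float   # timestamp of the first request packet (seconds)
    reply_end: float       # timestamp of the last reply packet (seconds)

    @property
    def duration(self) -> float:
        return self.reply_end - self.request_start

@dataclass
class Transaction:
    """A page load: the set of hits the browser issues to render one page."""
    page: str
    hits: List[Hit] = field(default_factory=list)

    @property
    def hit_count(self) -> int:
        return len(self.hits)
```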

Application Performance Analytics: Foundation & Key Insights
The Foundation: Hit Performance
Hit-level performance is the basic building block for the framework; it represents the smallest unit of measurement at the application layer, incorporating request and reply message flows as well as server processing delays. The measurement itself is quite straightforward; virtually any application-aware NPM (AA NPM) probe would provide this (it's often referred to as session-layer response time), and the only decode requirement is to identify the TCP ports used. A hit begins with a client request message (PDU) that the client's TCP stack segments into packets for transfer across the network; this is observed by the probe as the request flow. A hit concludes with the server's reply message that is similarly segmented into packets for transfer across the network in the reply flow.
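To make that concrete, here is a rough sketch of how a probe might derive those boundaries from captured packets. It is not DC RUM's implementation; the Packet record, its fields, and the hit_response_time() helper are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Packet:
    ts: float          # capture timestamp (seconds)
    direction: str     # "request" (client -> server) or "reply" (server -> client)
    payload_len: int   # TCP payload bytes; 0 for pure ACKs

def hit_response_time(packets: List[Packet]) -> Dict[str, float]:
    """Session-layer response time for one hit, as observed at the probe."""
    req = [p for p in packets if p.direction == "request" and p.payload_len > 0]
    rep = [p for p in packets if p.direction == "reply" and p.payload_len > 0]
    return {
        "request_flow": req[-1].ts - req[0].ts,   # duration of the request flow
        "server_gap":   rep[0].ts - req[-1].ts,   # last request packet -> first reply packet
        "reply_flow":   rep[-1].ts - rep[0].ts,   # duration of the reply flow
        "hit_total":    rep[-1].ts - req[0].ts,   # elapsed hit time at the probe
    }
```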

Often, the probe will sit near the application server, not the client, so a small adjustment should be made to the elapsed hit time observed at the server; add ½ of the network round-trip time (RTT) to the beginning and to the end of the measurement to arrive at a more accurate estimate of the performance at the client node. (RTT can be estimated by examining the TCP three-way handshake - SYN, SYN/ACK, ACK - if it exists, or by more sophisticated ACK timing measurements.)
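Continuing the sketch above, the adjustment itself is trivial once an RTT estimate is available; the function name is again illustrative.

```python
def client_perceived_hit_time(hit_total_at_server: float, rtt: float) -> float:
    """Approximate the hit time the client experiences when the probe sits near the server.

    Add half the round-trip time before the first request packet (the request in flight)
    and half after the last reply packet (the reply in flight), i.e. one full RTT total.
    """
    return hit_total_at_server + rtt

# Example: 180 ms observed at the server-side probe with a 40 ms RTT
# yields an estimate of roughly 220 ms at the client.
```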

[Figure: Timing diagram of a hit measurement]

Allocating delays to client, network and server categories starts by calculating the duration of the request and reply flows. At this coarse level, we have a very simple network/server breakdown, but we'll need to apply additional analytics to make it useful. While it's a relatively safe assumption that the delay between the last packet of the client request flow and the first packet of the server reply flow should be allocated to server time, it is not appropriate to assume that the duration of the request and reply flows should be allocated to network time. Instead, when a flow's throughput is low, we should evaluate whether this is caused by the sender or the receiver before we blame the network. For example:

  • The receiver can limit throughput by advertising a TCP window size smaller than the MSS (frequently - and crudely - identified as Win0 events).
  • The sender may be the culprit if it can't deliver packets to the network fast enough; we sometimes refer to this as "sender starved for data." (A rough sketch of this allocation logic follows this list.)
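Putting the pieces together, here is a hypothetical sketch of the coarse client/network/server allocation described above, building on the hit_response_time() sketch from earlier. The inputs that quantify receiver-window and sender-starvation time are assumed to come from deeper packet analysis that isn't shown here; detecting those conditions is the hard part.

```python
def allocate_hit_delay(timing: dict,
                       req_receiver_limited: float = 0.0,
                       req_sender_starved: float = 0.0,
                       rep_receiver_limited: float = 0.0,
                       rep_sender_starved: float = 0.0) -> dict:
    """Coarse client/network/server breakdown for one hit.

    `timing` is the dict produced by hit_response_time() above; the *_limited and
    *_starved arguments are the portions of each flow attributable to the receiver
    or the sender, respectively.
    """
    # The client sends the request and receives the reply; the server does the opposite.
    client = req_sender_starved + rep_receiver_limited
    server = timing["server_gap"] + req_receiver_limited + rep_sender_starved
    network = (max(0.0, timing["request_flow"] - req_sender_starved - req_receiver_limited)
               + max(0.0, timing["reply_flow"] - rep_sender_starved - rep_receiver_limited))
    return {"client": client, "network": network, "server": server}
```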

We can consider the remaining flow duration as network time - still a pretty broad category. To further understand network time, we would want to evaluate for packet loss and retransmission as well as TCP receive window constraints in relationship to the bandwidth-delay product (BDP).
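As a simple illustration of the receive-window check - again a sketch under assumed inputs, not a product algorithm - compare the advertised window to the bandwidth-delay product:

```python
def bdp_bytes(bandwidth_bps: float, rtt_seconds: float) -> float:
    """Bandwidth-delay product: the bytes that must be in flight to fill the path."""
    return bandwidth_bps / 8.0 * rtt_seconds

def window_limits_throughput(rwnd_bytes: int, bandwidth_bps: float, rtt_seconds: float) -> bool:
    """True when the advertised receive window can't fill the path.

    With a window smaller than the BDP, the sender stalls waiting for ACKs and
    throughput tops out near rwnd / RTT, regardless of link capacity.
    """
    return rwnd_bytes < bdp_bytes(bandwidth_bps, rtt_seconds)

# Example: a 64 KB window on a 100 Mb/s path with 40 ms RTT.
# BDP = 100e6 / 8 * 0.04 = 500,000 bytes; 65,535 < 500,000, so the window is the
# bottleneck, capping throughput near 65,535 / 0.04 ≈ 1.6 MB/s (about 13 Mb/s).
```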

A more sophisticated analysis would include tests for additional less-common behaviors such as Nagle, application windowing, and TCP slow start; you can read detailed discussions on these and other network-visible performance bottlenecks by downloading the eBook Network Application Performance Analysis.


More Stories By Gary Kaiser

Gary Kaiser is a Subject Matter Expert in Network Performance Analytics at Dynatrace, responsible for DC RUM’s technical marketing programs. He is a co-inventor of multiple performance analysis features, and continues to champion the value of network performance analytics. He is the author of Network Application Performance Analysis (WalrusInk, 2014).


