Welcome to the Show of CDN Monitoring: Act 3

Things Going Wrong

In my previous two posts, Act 1 - The What and Why and Act 2 - The How and How Not, I covered the main benefits of CDNs and what types of tools are needed to monitor them. Now I want to go into some detail on why you need to consider monitoring them in the first place. Let's face it: if things work well enough, why worry about monitoring them? Of course it is great to be able to see all the details, but do I really need all that information? Aren't the CDNs doing a good enough job?

The answer to that last one: Yes, CDNs are doing a very good job ... most of the time.

How can you know the CDN delivers what it promised?
In the end, you will have made your decision to go with a CDN based on criteria like "if we invest X in this solution, the improvement needs to be at least Y". Typically, X and the expected improvement are fairly easy to define - but when it comes to determining the actual Y delivered, traditional testing or monitoring approaches are used, and as I explained in Act 2 - The How and How Not, they fail to deliver the right answers.

Back in my theatre days (see Act 1 - The What and Why) we took the risk and chose our marketing flyer distribution agency purely based on budget and gut feeling. Soon our flyers could be found in relevant tourist and culture locations across Berlin and even all the way in Hamburg. They even gave us the option to replace the material with new versions on short notice (e.g., updating the flyers with a note that the first two shows were sold out already). And believe me, we certainly didn't have the bandwidth to pull such a stunt on our own just a few days before the opening night.

Our theatre "CDN" journey ended there - but think of bigger players like Disneyland or "Phantom of the Opera" that spend a fortune on making sure I see their flyers in all the hotels I visit - they sure seem to follow me wherever I travel.

We were not "professional" enough to actually validate that our investment was a good one - or to even systematically check whether they did everything they told us. Again: we were quite busy running the show. But overall we were quite happy.

And now think of your CDN investment.

What does the CDN promise?
Instead of just copying what different vendors publish on their websites or repeating the core benefits explained in Act 1, let me list the basic technical aspects most people think of:

  • Get the content closer to the end user
  • Cache the content and lower the traffic on my data center
  • Balance the load to deliver a good performance even in peak times
  • Always be available

What could possibly go wrong?
Looking at this simplified view, everything sounds fine, and with such basic items the risk of failure should be relatively low.

However, we very often see issues caused by CDN customer misconfigurations, CDN outages or other irregularities that go undetected by traditional testing/monitoring approaches.

And again, thinking back: all of these issues exist in the real brochure world as well.

While not complete, the list includes the following (a quick spot-check sketch follows the list):

  • Wrong routes sending the request halfway across the globe instead of to the closest PoP.
    Once I saw a whole stack of Phantom of the Opera flyers in a nice little hotel in Germany - alas, advertising the great show in Singapore.
  • Content not cached or compressed the way it should be.
    Brochures folded in the wrong format, and thus not fitting into the stands, are quite damaging. And sending out the wrong caching headers or breaking the nice content compression is also not something you would get a lot of applause for.
  • All requests hitting the same PoP instead of spreading the load.
    One time all of our brochures were placed in just one of the hotel lobbies instead of all the lobbies we had paid for in our package. Luckily, a friend pointed it out to us.
  • DNS mishaps.
    Misprinting the contact information on where to find the show or how to book tickets resembles the case of a customer who found out that the DNS entries were wrong in some key markets, resulting in the site not being available in a number of countries.
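Several of these issues can be caught with a simple spot check. The following is a minimal sketch in Python - not the CDN-level monitoring approach described in Act 2, and the URL is a hypothetical placeholder for one of your own CDN-hosted assets - that checks where a CDN hostname resolves to and whether the caching and compression headers look the way you expect:

    # Minimal spot check for some of the misconfigurations listed above.
    # Assumption: the URL below is a hypothetical placeholder - use one of your own CDN assets.
    import socket
    import urllib.request
    from urllib.parse import urlparse

    URL = "https://cdn.example.com/assets/app.js"  # hypothetical CDN-hosted asset
    host = urlparse(URL).hostname

    # 1. Where does the hostname resolve to? An unexpected or far-away edge IP
    #    can be a first hint that requests are routed to a distant PoP.
    edge_ips = sorted({info[4][0] for info in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)})
    print("Resolved edge IPs:", edge_ips)

    # 2. Do the response headers show the caching and compression you expect?
    req = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        for header in ("Cache-Control", "Age", "Content-Encoding", "Via", "X-Cache"):
            print(f"{header}: {resp.headers.get(header, '<missing>')}")

Headers like Via, X-Cache or Age vary between CDN vendors, so treat them as hints rather than a guarantee; proper CDN monitoring, as discussed in Act 2, still gives the fuller picture.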

Example 1: CDN requests misrouted across the globe
Surprisingly, our data shows something that happens quite often and across a number of CDN vendors: instead of routing the request to the closest PoP and thus minimizing round-trip latency, the request is often routed to a server far away from the actual user. For a single request the impact might be negligible, but when most of your resources are served with a latency of, say, 500ms, it quickly adds up and results in a very high total response time.

In these particular examples, synthetic tests were executed using the Compuware APMaaS Last Mile agents from a large number of different locations within Australia, Germany and Italy.

The results show that in many cases PoPs in the USA were hit from machines located in Australia, and while some of these did offer very low connection latency, quite a number of them caused a dramatic slowdown in overall end-user performance. The table shows the average connect time from the Last Mile agents to the PoP as a measure of latency.

1. Average connection time of the 20 most frequently hit CDN PoPs during synthetic Last Mile tests in Australia
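As a rough illustration of what this connect-time measurement represents - this is not the Compuware APMaaS Last Mile test, and the hostname is a hypothetical placeholder - you can time a few TCP connects from a single machine to whatever edge the CDN hostname resolves to. Connect times in the hundreds of milliseconds suggest the request is landing on a distant PoP:

    # Roughly approximate the "connect time to the PoP" measurement from a single machine.
    # Assumption: cdn.example.com is a hypothetical CDN hostname - substitute your own.
    import socket
    import time

    HOST = "cdn.example.com"
    PORT = 443
    SAMPLES = 5

    edge_ip = socket.gethostbyname(HOST)   # the edge IP this machine is routed to
    connect_times_ms = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        with socket.create_connection((edge_ip, PORT), timeout=5):
            pass  # we only care about how long the TCP handshake took
        connect_times_ms.append((time.perf_counter() - start) * 1000)

    print(f"Edge IP: {edge_ip}")
    print(f"Average connect time over {SAMPLES} samples: {sum(connect_times_ms) / len(connect_times_ms):.1f} ms")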

Another example shows that it's not only Australia that has such issues. The following map shows results from a test conducted in Germany over 24 hours. Green dots represent the locations of the end-user machines used for the synthetic Last Mile test, and red dots show the locations of the CDN PoPs that were hit.

2. Requests from within Germany are routed to CDN PoPs across the globe

Yet another case, found with one of our Italian customers, showed that the response times of end users connected via Telecom Italia as their local ISP were 33% above the average of all others. Looking at the network components, the biggest difference was the average connection time to the CDN PoPs being used: the end-user machines on Telecom Italia had an average connection time of 298ms, while all others connected within 181ms. Drilling into the details, we found that most of the CDN PoPs hit by these end-user machines were actually located in the US, which of course explains the drop in performance.

Comparison of average CDN PoP connection time: Telecom Italia vs. other ISPs

CDN PoPs hit from Italy - minimum 100 total connections

Lesson learned: Make sure you know where the content is delivered from, and correlate end-user performance with increased latency due to misrouting.
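To illustrate the kind of slice-and-compare analysis behind the Italian example, here is a minimal sketch that groups measured CDN connect times by the end user's ISP and compares each ISP's average against the overall average. The sample records are made up for illustration; in practice you would feed in an export of your own monitoring data.

    # Group CDN PoP connect times by ISP and flag ISPs well above the overall average.
    # Assumption: the sample records below are illustrative, not real measurement data.
    from collections import defaultdict
    from statistics import mean

    measurements = [            # (ISP name, CDN PoP connect time in ms)
        ("Telecom Italia", 298), ("Telecom Italia", 310), ("Telecom Italia", 285),
        ("Other ISP A", 175), ("Other ISP A", 190), ("Other ISP B", 178),
    ]

    by_isp = defaultdict(list)
    for isp, connect_ms in measurements:
        by_isp[isp].append(connect_ms)

    overall_avg = mean(ms for _, ms in measurements)
    print(f"Overall average connect time: {overall_avg:.0f} ms")
    for isp, values in sorted(by_isp.items()):
        avg = mean(values)
        print(f"{isp:<16} avg {avg:.0f} ms ({(avg / overall_avg - 1) * 100:+.0f}% vs overall)")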

To dive into examples 2-5, click here to read more:

Example 2: Cache hit ratio too low

Example 3: Oversubscription of a CDN PoP

Example 4: DNS misconfiguration

Example 5: Real User Monitoring detects CDN performance peak

Don't live with the risk of not knowing what is going on with your application. Click here to be sure your CDN Monitoring is being handled correctly!

More Stories By Kristian Skoeld

Kristian Skoeld is a Performance Analyst at the Compuware APM Center of Excellence. He coaches and supports teams across Europe as a Performance Analyst and Product Specialist in Web Performance Management. He is an expert in optimizing IT processes, developing web strategies and putting them into action, and a subject matter expert on Web Performance and Web Monitoring within the Compuware APM business unit.
