
Microservices: Operationalization of the Network | @DevOpsSummit [#DevOps]

They should be orchestrated via APIs based on jointly determined (between Dev, Ops, and the Business) thresholds

Why Microservices Are Driving Operationalization of the Network

Microservices. Service-oriented, but not SOA, this architecture is becoming more common as mobility and time to market drive up the ante in this high-stakes game of applications.

But just what are microservices? If you want a more complete and lengthy view I highly recommend reading Martin Fowler's series on the topic. If you're looking for a summary, then read on.

Microservices are the result of decomposing applications. That may sound a lot like SOA, but SOA was based on an object-oriented (noun) premise; that is, services were built around an object - like a customer - with all the necessary operations (functions) that go along with it. SOA was also founded on a variety of standards (most of them coming out of OASIS) like SOAP, WSDL, XML and UDDI. Microservices have no standards (at least none deriving from a standards body or organization) and can be based on nouns, like a customer or a product, but just as easily can be based on verbs; that is, functional groups of actions that users take when interacting with an application like "login" or "checkout."

Microservices change the scalability model of an application. A traditional, monolithic application is typically scaled out, horizontally, using an x-axis scaling pattern. That means as demand increases, additional copies (clones) of the application are added to a pool. A load balancing service provided by a proxy or an application delivery controller (ADC) distributes requests across that pool.
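The x-axis pattern can be sketched in a few lines. This is a minimal illustration, not any particular proxy's implementation; the pool member names are hypothetical, and the rotation stands in for whatever balancing algorithm the proxy or ADC actually applies.

```python
from itertools import cycle

# Illustrative x-axis scaling: identical clones of the whole app in one pool,
# with a proxy/ADC rotating requests across them (round-robin).
pool = ["app-clone-1", "app-clone-2", "app-clone-3"]

def make_balancer(members):
    """Return a callable that yields the next pool member, round-robin."""
    rotation = cycle(members)
    return lambda: next(rotation)

next_instance = make_balancer(pool)
served = [next_instance() for _ in range(6)]
# Six requests land evenly: each clone serves exactly two.
```

Every clone is interchangeable, which is precisely why the monolith scales coarsely: the whole application is duplicated even if only one function inside it is under load.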

Microservices can still be scaled out horizontally, but across individual services. In other words, microservices break down applications into multiple components based on functional or object groupings. Each service is individually scaled, with the result being that the application is scaled more efficiently. That Monday morning rush to log in to the app winds up forcing a scaling event for the "login" service, but not the "logout" or "search" service.
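The per-service model can be made concrete with a capacity calculation. The demand figures and per-instance capacity below are assumed values chosen to mirror the Monday-morning login rush; only the hot service triggers a large scaling event.

```python
# Illustrative per-service scaling; demand and capacity figures are assumptions.
demand = {"login": 950, "logout": 40, "search": 120}   # requests/sec per service
capacity_per_instance = 100                             # assumed per-instance capacity

def instances_needed(service_demand, per_instance):
    """Scale each service independently: ceiling division, minimum one instance."""
    return {svc: max(1, -(-load // per_instance))
            for svc, load in service_demand.items()}

plan = instances_needed(demand, capacity_per_instance)
# Only "login" scales out aggressively; "logout" stays at a single instance.
```

Compare that with the monolith, where the same rush would have forced ten clones of the entire application, login, logout, search and all.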

What this means is that operations tends to isolate each service in its own scalability domain, each with its own dedicated control point (proxy or ADC). The more services, the more control points you need.

There are two ways to do this (well, probably more but for the sake of brevity let's focus on the main two ways). One is to use a very high capacity ADC capable of acting as a central control point for hundreds (or thousands) of services. The other is to distribute that control to each service, essentially building out a tree-like hierarchy of control points at which services can be scaled.

It is the latter that is increasingly popular: the architectural scalability pattern for applications is becoming one that scales the application with a central command-and-control ADC while scaling out its composite services using service-specific proxies.
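That two-tier hierarchy reduces, at its simplest, to a routing table: the central ADC virtualizes the application and hands each functional path to a service-specific proxy, which owns its own pool. All names below are hypothetical, and the first pool member stands in for whatever balancing the proxy itself would do.

```python
# Illustrative two-tier routing: central ADC -> per-service proxy -> instance pool.
tiers = {
    "/login":  ("login-proxy",  ["login-1", "login-2"]),
    "/search": ("search-proxy", ["search-1"]),
}

def route(path):
    """Central ADC picks the proxy tier; the proxy then balances its own pool."""
    proxy, pool = tiers[path]
    return proxy, pool[0]  # first member stands in for the proxy's own algorithm
```

The point of the structure is that scaling "/login" means touching only the login proxy's pool; the ADC's view of the application does not change.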

This makes sense from the perspective of adjusting application behavior to ensure the performance and security of applications based on context (the unique combination of device, network and application) while enabling the availability and scale of the services that comprise that application. That's because those same services can (and many would argue, should) be used in other applications. Tuning a service for one application no longer makes sense, as tweaking TCP and HTTP options for one application can actually be detrimental to another. By delegating responsibility for app performance and security to the "thing" that virtualizes the app - the ADC - and enabling the "thing" responsible for scaling a specific service - the proxy - the result is a finer-grained scaling architecture that is better able to adapt to service-specific and app-specific requirements.

All this is well and good, but how does that drive operationalization?

Driving the Need to Operationalize

Well, if you consider how many services might need scaling, you can start seeing that it's impractical to manually manage the process. With hundreds (or more) services needing scalability on-demand, manual processes that include launching a new instance of the service and adding it to the proxy pool (and reversing that process when demand diminishes) are simply not feasible.

Operationalization - people collaborating and using programmability to optimize the processes necessary to meet business priorities - takes the approach that these processes, which can be abstracted and encapsulated into a well-defined set of steps, should be orchestrated via programmability (APIs) based on jointly determined (between Dev, Ops, and the Business) thresholds. Those thresholds might be based on performance or capacity or both; the important thing is to ensure they're determined collaboratively to ensure the overall application is able to meet the expectations of the business (and users).
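The threshold-driven decision at the heart of that orchestration can be sketched briefly. The threshold values here are assumptions standing in for whatever Dev, Ops, and the Business actually agree on; in practice the return value would drive an API call to the service's proxy rather than just a string.

```python
# Sketch of threshold-driven scaling; threshold values are assumed, not prescribed.
THRESHOLDS = {"latency_ms": 200, "utilization": 0.80}

def scale_decision(metrics, thresholds):
    """Return 'out', 'in', or 'hold'; the caller would invoke the proxy's API."""
    if (metrics["latency_ms"] > thresholds["latency_ms"]
            or metrics["utilization"] > thresholds["utilization"]):
        return "out"   # either threshold breached: add an instance
    if metrics["utilization"] < thresholds["utilization"] / 2:
        return "in"    # well under capacity: reclaim an instance
    return "hold"
```

Because the thresholds live in one shared structure rather than in someone's head, they become an artifact Dev, Ops, and the Business can review and revise together.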

If an application that typically needed one set of services is decomposed into ten different services, then ostensibly you'll need ten times the services. Which is a lot of services. Services that are, because they're deployed in the network between users and apps, managed by network and operations staff.

The reality is you can't grow staff at the rate required to handle the load. But it's also true that it isn't the load that breaks you down, it's how you carry it.

Thus, operationalizing the provisioning, configuration, and lifecycle management of those services using programmability (APIs, automation and orchestration frameworks) can dramatically reduce the operational impact on this sudden explosion of services.

As we saw with server virtualization, templating and automated provisioning and management enable server admins to scale from managing fewer servers to averages that would be impossible to match with physical counterparts. The same thing has to happen in the network if IT is going to support the explosion of app services needed to maintain the performance, security and scale of microservices.

That doesn't necessarily mean the network has to go virtual, but it does need to enable the same characteristics as its more automated, virtual counterparts in the compute domain. That is, network services need to be rapidly provisioned, API-enabled, and templatized, providing the means by which they can be centrally managed through automation and orchestration.
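Templatization is the piece that makes central management tractable: one template, many stamped-out configs. The config format below is purely illustrative, not any vendor's actual schema, and the addresses are made up.

```python
from string import Template

# Hypothetical per-service proxy config template; field names are illustrative.
PROXY_TEMPLATE = Template("service ${name} {\n  listen ${port}\n  pool ${members}\n}")

def render(name, port, members):
    """Stamp out a per-service config an orchestration framework could push via API."""
    return PROXY_TEMPLATE.substitute(name=name, port=str(port),
                                     members=",".join(members))

config = render("login", 8443, ["10.0.0.5", "10.0.0.6"])
```

This is the same move server virtualization made with machine templates, applied to the network: the admin maintains the template and the thresholds, and automation maintains the hundreds of instances.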

We really need to operationalize all the network things.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
