What a Network Engineer Does

Network Engineering workflow can be characterized by overlapping cycles of Activity and Modeling

In a previous article, we talked about “Short T’s.”  We talked about how, in network engineering, the “T” is very long:  Configuring a network to achieve business goals requires considerable skill and knowledge.  While we set up a conceptual model in that post to talk about what “T” means in general terms, we did not discuss in detail how to articulate “T” more specifically for network engineering.  In this post, we’ll explore this in a little more detail.

The NetEng Cycle

Figure 1: The Network Engineering Cycle

Network Engineering workflow can be characterized by overlapping cycles of Activity and Modeling.  In Figure 1, I have depicted four cycles.  From smallest timescale to largest, these are: 1. Referential Traversal, 2. Interactive, 3. Design, and 4. Architecture.  The crest of each of these cycles is “Activity” and the trough is “Modeling.”  Modeling on the smaller cycles is simple and correlative, while on the larger cycles it is more abstract and analytical.  Activity on the smaller cycles is characterized by direct interaction with the network, while on the larger cycles it is indirect and more design-oriented.

As the diagram implies, a network engineer oscillates between activity and modeling.  For instance, in the interactive cycle, they may configure a QoS classification policy, then immediately issue show commands to see whether traffic is being classified appropriately.  Configuring a policy and issuing show commands are activities, but the show commands begin the transition into modeling: the engineer is attempting to model the immediate effect of the changes they have made.  Based on this model of “how things are,” the engineer might consider modifications to the classification policy to bring the operation of the network closer to an expected model of “how things should be.”  As far as it is possible to do so, an attempt might be made to model “how things will be,” to check for possible side effects.  The cycle then repeats.
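To make this loop concrete, here is a minimal Python sketch of one turn of the interactive cycle.  It is illustrative only: the counter values and function names are invented, and a real workflow would populate them by parsing actual show output (e.g., from “show policy-map interface”) rather than hard-coding them.

```python
# One turn of the interactive cycle: act, observe, model "how things
# are," compare against "how things should be," then adjust and repeat.
# All data below is invented for illustration; get_class_counters is a
# hypothetical stand-in for parsed show-command output.

EXPECTED_SHARE = {"voice": 0.10, "video": 0.30, "best-effort": 0.60}
TOLERANCE = 0.05  # acceptable drift between observed and expected

def get_class_counters():
    """Stand-in for parsed 'show policy-map interface' counters."""
    return {"voice": 1200, "video": 2500, "best-effort": 8300}

def model_how_things_are(counters):
    """Model the immediate effect of the policy as per-class traffic share."""
    total = sum(counters.values())
    return {cls: count / total for cls, count in counters.items()}

def drift(actual, expected):
    """Classes whose observed share strays too far from the expected model."""
    return {cls: actual[cls] - expected[cls]
            for cls in expected
            if abs(actual[cls] - expected[cls]) > TOLERANCE}

actual = model_how_things_are(get_class_counters())
for cls, delta in drift(actual, EXPECTED_SHARE).items():
    print(f"{cls}: off by {delta:+.0%}; revisit the classification policy")
```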

Referential Space
But which show commands should they use to accurately model how the configuration is actually working?  If you were to write down the exact sequence of commands, you might find that the engineer takes data from the output of the first command and uses it either as input to the second command, or as a point of reference while examining the second command’s output.  The output of the second command might, in turn, be used similarly when executing a third show command.  This is what is called Referential Traversal: a network engineer engaging in iterative data correlation in support of a workflow.  In the context of a workflow, this data represents that workflow’s state.
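As a hedged illustration, the Python sketch below walks one such chain: a route lookup yields a next hop, the next hop keys an ARP lookup, and the resulting MAC address keys a lookup in the MAC table.  The tables are invented; on a real device each would come from parsing a show command.

```python
# Sketch of a referential traversal: each step's output supplies the
# key for the next lookup, and the accumulated results form the
# workflow's state. The tables are invented stand-ins for parsed
# output of commands like 'show ip route', 'show ip arp', and
# 'show mac address-table'.

route_table = {"10.1.1.0/24": {"next_hop": "192.0.2.1", "interface": "Eth1/1"}}
arp_table = {"192.0.2.1": "00:11:22:33:44:55"}
mac_table = {"00:11:22:33:44:55": "Eth1/1"}

def traverse(prefix):
    """Walk route -> ARP -> MAC, feeding each result into the next step."""
    state = {"prefix": prefix}                   # the workflow's state
    route = route_table[prefix]                  # step 1: route lookup
    state["next_hop"] = route["next_hop"]
    state["mac"] = arp_table[state["next_hop"]]  # step 2: keyed by step 1
    state["egress"] = mac_table[state["mac"]]    # step 3: keyed by step 2
    return state

print(traverse("10.1.1.0/24"))
```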

Another well-known referential traversal is a manual packet-walk of the network: examining nodes along the path between two endpoints on the edge of the network to determine whether there is a potential issue along that path.  Here, the engineer examines lookup tables, ARP entries, and LLDP neighbor information, jumping from one node to the next.  This particular workflow can branch in tricky ways, such as examining when and what configuration changes were made to see if they could impact traffic between those two endpoints.  When branching into examination of a device configuration, you enter a different set of correlated data: a route-map applied to an interface can, in turn, reference access-lists or prefix-lists.  The rules for evaluating packet flow through a policy follow different logic than the general rules for packet flow across a series of devices.
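The sketch below automates this packet-walk naively over an invented three-node topology, assuming the forwarding and LLDP tables have already been collected from each device; it is a sketch of the traversal logic, not of real device access.

```python
# Naive packet-walk: at each node, find the egress interface for the
# destination prefix, then use LLDP neighbor data to hop to the next
# node, stopping at the network edge. All topology data is invented.

# node -> {destination prefix -> egress interface}
forwarding = {
    "leaf1":  {"10.2.0.0/16": "Eth1"},
    "spine1": {"10.2.0.0/16": "Eth4"},
    "leaf2":  {"10.2.0.0/16": "host-facing"},
}

# (node, egress interface) -> LLDP neighbor on the far end
lldp = {
    ("leaf1", "Eth1"): "spine1",
    ("spine1", "Eth4"): "leaf2",
}

def packet_walk(start, prefix):
    """Follow the forwarding path node by node until the edge."""
    path, node = [start], start
    while True:
        egress = forwarding[node][prefix]
        nxt = lldp.get((node, egress))
        if nxt is None:          # no neighbor: we've reached the edge
            return path
        path.append(nxt)
        node = nxt

print(packet_walk("leaf1", "10.2.0.0/16"))  # ['leaf1', 'spine1', 'leaf2']
```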

Figure 2: Referential Space

If you take the set of rules, relationships, and data points from “configuration space” and the rules, relationships, and data points from “forwarding space,” and you combine them with all other such spaces that a network engineer must deal with in the course of their activities, the sum of these is called “referential space” (see Figure 2).  A network engineering workflow follows some referential path through this space, examining data and following its relationships to yet other data.  There are numerous interconnected spaces in the management, control, forwarding, and device planes of a network, each with its own logic and types of data.  There are more abstract spaces as well, such as a “design” space that contains the rules and relationships that govern network design.  A network engineer’s expertise is measured by how well they can navigate referential space in support of the longer time-scale cycles.
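One way to picture this, as a sketch over invented data: treat referential space as a graph whose nodes are data points tagged with the space they belong to, and whose edges are the rules relating them.  A workflow is then a path through that graph, crossing from one space into another.

```python
# Referential space as a graph: nodes are (space, datum) pairs, edges
# are references between them. A workflow traces a path through this
# graph, here crossing from forwarding space into configuration space.
# Every node and edge below is invented for illustration.

edges = {
    ("forwarding", "route 10.1.1.0/24"):  [("forwarding", "next-hop 192.0.2.1")],
    ("forwarding", "next-hop 192.0.2.1"): [("config", "interface Eth1/1")],
    ("config", "interface Eth1/1"):       [("config", "route-map EDGE-IN")],
    ("config", "route-map EDGE-IN"):      [("config", "prefix-list CUSTOMERS")],
}

def referential_path(start):
    """Follow references from one datum to the next across spaces."""
    path, node = [start], start
    while node in edges:
        node = edges[node][0]  # a real traversal would choose among branches
        path.append(node)
    return path

for space, datum in referential_path(("forwarding", "route 10.1.1.0/24")):
    print(f"[{space:>10}] {datum}")
```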

Enablement versus Obviation
The challenge of networking, and the reason that automation (and UX/UI, for that matter) has not evolved terribly well, is that these referential paths vary greatly based on what the network engineer is trying to do and how a particular network is built.  There is a vast set of rules governing the many relationships among a seemingly infinite array of data types.  The dynamic nature of referential traversal, and the intimidating size of referential space, should justify a healthy skepticism of vendors claiming to encapsulate network complexity or automate network workflows.  More often than not, they are simply moving the complexity around, while making it more difficult to navigate in the process.

It is long overdue to move innovation in networking toward enabling network engineers to be more effective instead of trying to obviate them.  Unlike in the past, this should happen with a keen understanding of what network engineers actually do and how they think through their activities.  We can augment these activities to reduce time-to-completion and time-to-insight while at the same time reducing risk and increasing accountability.  There are many networking workflows that, even after 20 years, are still notoriously difficult and risky to model and complete.  Let’s solve these problems first.

Make Things Better
As a network engineer, how many times have you heard about the glorious wonders of a product that automates networking or encapsulates network complexity in some way?  After 20 years, we have been trained to identify this language as snake oil or, to put it a little more nicely, “marketing speak.”  When we buy into these products or features, it’s always just a matter of time before they go unused or the ugly realities of their operation surface.

Encapsulating network complexity, or automating network workflows, can’t just be about “faster.”  That’s only part of the problem.  It has to make things “better.”  This can only happen with a deeper understanding of referential space.

The post What a Network Engineer Does appeared first on Plexxi.


More Stories By Derick Winkworth

Derick Winkworth has been a developer, network engineer, and IT architect in various verticals throughout his career.  He is currently a Product Manager at Plexxi, Inc., where he focuses on workflow automation and product UX.
