Converging Your Storage Network Without Fear

The days of completely separate storage network technologies are quickly fading

The days of completely separate storage network technologies are quickly fading. It feels like only a few years ago that Fibre Channel was the way to build large-scale storage networks: big honking storage devices on a separate network, connected to Fibre Channel switches, connected with Fibre Channel adapters into servers. With 10GbE becoming cost effective and matching or outperforming 2, 4, or even 8Gb Fibre Channel, Fibre Channel over Ethernet (FCoE) was invented, mostly as a mechanism to allow Ethernet-based attachment to existing Fibre Channel installations. It’s a bit clumsy, but it works.

A variety of Ethernet- and IP-based storage mechanisms have gained significant popularity in the past few years. Storage has become denser and cheaper, and advances in file system technology have made it much more convenient to distribute storage and access it as if it were one large system. Most of the customers we see have a mixture of server co-located storage (typically 4-10 TB per system) and mid-size storage arrays in the tens to hundreds of TB, distributed across the network. Whether it’s NAS, iSCSI, ATA over Ethernet or any other Ethernet- or IP-based storage technology, it is clear that networked storage is where we are going. Don’t get me wrong, FC and FCoE will be around for a long time to come. Billions of dollars have been invested, and those dollars will be protected. But even the bellwether FC storage companies provide Ethernet- and IP-based access to their arrays these days.

Interestingly enough, while many have made or are making the transition to, dare I call it, traditional network technologies for storage transport, many still create separate, parallel Ethernet networks for storage. Servers have multiple NICs, with some of them dedicated to the storage network. The reasons for this separation are fairly straightforward. The storage network used to be dedicated, and it still is, just now on cheap 10GbE. The people who manage the storage still have the same full control; it’s still a storage network. All of the management, provisioning and tuning tools still apply; all that has really changed is the medium. But above all, any time I have had a “why don’t you converge your storage and data network” discussion with a customer, it has almost always come down to a fear of running storage traffic and regular data traffic on the same network. It’s a fear of interference, congestion, perhaps a fear of the unknown: “what happens to my storage performance when I run the two on the same network?”

The folks at Network Heresy have devoted a few articles to the age-old network problem of mice and elephant flows. Combining storage and regular data on the same network will amplify this problem: storage functions like replication and archiving are extremely likely to be elephant flows and can have quite the impact on the rest of the flows. Perhaps that fear is rational?
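To make the imbalance concrete, here is a toy back-of-the-envelope sketch in Python. The flow sizes and the 100 MB elephant threshold are illustrative assumptions, not measurements:

```python
# A toy illustration of why elephant flows worry people: one bulk
# replication stream can dwarf thousands of small application flows.
flows = [("app-rpc", 64_000)] * 5000                    # 5,000 mice, ~64 KB each
flows += [("storage-replication", 2_000_000_000_000)]  # one 2 TB elephant

ELEPHANT_BYTES = 100 * 1024 * 1024  # common heuristic: >100 MB is an elephant

elephants = [f for f in flows if f[1] >= ELEPHANT_BYTES]
total = sum(b for _, b in flows)
print(f"{len(elephants)} elephant flow(s) carry "
      f"{sum(b for _, b in elephants) / total:.1%} of all bytes")
```

One flow out of 5,001 ends up carrying well over 99% of the bytes, and on a shared link it queues ahead of, or alongside, every latency-sensitive mouse.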

There are certainly ways to try to engineer your way through this, but they are awfully complicated. I can create a different class of service for storage traffic, queue it differently, give it different drop preferences and use all the other tricks to try to ease the potential for interference. But the data still traverses the same links, and all we have done is make one kind of traffic more important than the other, which may be perfectly valid for replication and archiving, but not at all for real-time access to storage by regular applications.
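For illustration, here is a minimal sketch of that class-of-service approach using Linux traffic control (tc), driven from Python. The interface name, the rates, and the use of the iSCSI port to identify storage traffic are all assumptions; the point is that even with this in place, both classes still share the same physical links:

```python
# A minimal class-of-service sketch: storage traffic gets its own HTB
# class and higher priority, everything else falls into a default class.
# Requires root on a Linux host; "eth0" and the rates are assumptions.
import subprocess

DEV = "eth0"  # assumed 10GbE interface carrying both traffic types

def tc(*args):
    subprocess.run(["tc", *args], check=True)

# Root HTB qdisc; unclassified traffic falls into class 1:20.
tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb", "default", "20")
tc("class", "add", "dev", DEV, "parent", "1:", "classid", "1:1",
   "htb", "rate", "10gbit")
# Storage class: guaranteed share, allowed to borrow, highest priority.
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:10",
   "htb", "rate", "4gbit", "ceil", "10gbit", "prio", "0")
# Regular data class: everything else.
tc("class", "add", "dev", DEV, "parent", "1:1", "classid", "1:20",
   "htb", "rate", "6gbit", "ceil", "10gbit", "prio", "1")
# Steer iSCSI (TCP port 3260) into the storage class.
tc("filter", "add", "dev", DEV, "parent", "1:", "protocol", "ip",
   "u32", "match", "ip", "dport", "3260", "0xffff", "flowid", "1:10")
```

Note what this buys you: relative priority on a shared link, nothing more. The elephant and the mouse still meet in the same queueing hierarchy.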

So what if you could separate your storage and regular data traffic while staying on the same, single Ethernet network? What if you could isolate traffic to and from storage onto specific links, while all other traffic used other links? Give lowest-hop-count paths to applications and storage that require the lowest latency; give paths with more latency but more available bandwidth to those replication and archiving flows that are not particularly latency sensitive. You can create a single Ethernet network and still separate your storage traffic from your regular data traffic. I have seen solutions that propose a virtualized L2 network to separate the two types, but that only provides logical separation: the two still flow across the exact same links, with the same potential for interference.
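A rough sketch of that idea, using the networkx graph library on a made-up five-switch fabric: compute the lowest-hop path for latency-sensitive traffic, then route bulk storage flows only over links the latency path does not touch. The topology and switch names are hypothetical:

```python
# Carve link-disjoint paths out of one Ethernet fabric: physical, not
# just logical, separation of latency-sensitive and bulk traffic.
import networkx as nx

fabric = nx.Graph()
fabric.add_edges_from([
    ("s1", "s2"), ("s2", "s4"),                 # short path: 2 hops
    ("s1", "s3"), ("s3", "s5"), ("s5", "s4"),   # longer path: 3 hops
])

# Lowest-hop path for applications and storage access that need low latency.
latency_path = nx.shortest_path(fabric, "s1", "s4")

# Remove the links the latency path uses, then route replication and
# archiving elephants over whatever capacity remains.
bulk_fabric = fabric.copy()
bulk_fabric.remove_edges_from(zip(latency_path, latency_path[1:]))
bulk_path = nx.shortest_path(bulk_fabric, "s1", "s4")

print("latency-sensitive:", latency_path)  # ['s1', 's2', 's4']
print("bulk storage:     ", bulk_path)     # ['s1', 's3', 's5', 's4']
```

One fabric, two classes of traffic, zero shared links between them: the elephants get the longer, higher-capacity path and never contend with the mice.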

A controller that manages topologies and paths based on end-to-end visibility, and that allows you to articulate in fairly abstract terms what you want separated, gives you this option. Having that same controller integrate with storage management, so that this can be fully or partially automated, is even better. Think about it: a single storage and data network where you control how converged or diverged the various data streams are is very powerful. We have demonstrated this in the past, and there are real solutions based on these capabilities.
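Purely as a sketch of what "articulate in fairly abstract terms" might look like, here is a hypothetical policy object such a controller could accept and compile into per-flow paths. The names and fields are invented for illustration and are not a real Plexxi API:

```python
# Hypothetical intent declaration: say *what* should be separated and
# what each class optimizes for; the controller works out the links.
from dataclasses import dataclass, field

@dataclass
class IsolationPolicy:
    name: str
    match: dict          # how to recognize the traffic (subnet, port, VLAN...)
    objective: str       # "minimize-latency" or "maximize-bandwidth"
    share_links_with: set = field(default_factory=set)  # allowed co-residents

policies = [
    IsolationPolicy("app-storage-access", {"tcp_dport": 3260},
                    "minimize-latency"),
    IsolationPolicy("storage-replication", {"subnet": "10.9.0.0/16"},
                    "maximize-bandwidth"),
]
# With end-to-end visibility, the controller compiles these into paths so
# the two classes never traverse the same physical links.
```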

We are collectively making the step into network-based storage. We should also make the step to run it effectively, with maximum performance and minimal interference, on a single network. It really can be pretty simple…

[Today's fun fact: Sony was the last major company to stop producing 3.5 inch floppy disks in 2011. In 2009 they still sold 12 million of them. Oh I miss my Commodore VIC-20 with its tape drive.]

The post Converging your storage network without fear appeared first on Plexxi.

More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
