Tackling Dependencies

Automated dependency resolution

You don't have to be around Linux for long before you hear about "the dependency problem," which is no problem at all for many users - until the day it bites them. In a nutshell, the problem is that most Linux applications depend on the operating system to provide various pieces of functionality that the applications need. These components most often take the form of shared libraries that are dynamically loaded and linked to the application at runtime. Problems occur when one or more of these libraries are replaced with a different (usually newer) version. Provided all the interfaces remain the same and the semantics of the functionality remain the same, there's no problem. However, due to security fixes, bug fixes, or new or improved functionality, the interfaces and semantics can and do change, and the changes can be enough to break the application.
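To make the mechanism concrete, the sketch below loads a shared library at runtime from Python using ctypes and resolves a symbol from it - essentially what the dynamic linker does for a compiled application at startup. The choice of libc and printf is purely illustrative; any versioned shared object behaves the same way.

# A minimal sketch of runtime dynamic linking (assumes a Linux system
# where the C library can be located by name).
import ctypes
import ctypes.util

libc_path = ctypes.util.find_library("c")   # e.g. "libc.so.6"
libc = ctypes.CDLL(libc_path)

# Resolve a symbol from the shared library at runtime and call it.
printf = libc.printf
printf(b"resolved printf from %s at runtime\n", libc_path.encode())

# If libc.so.6 were replaced by a version whose interfaces or semantics
# changed, the symbol lookup could fail or the call could misbehave -
# exactly the breakage described above.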

Individual libraries generally aren't distributed on their own but as part of a package of related components, and packages are installed as a whole. There's usually no way to install just one part of a package, and doing so would be inviting trouble, since packages are composed of related components that need to be installed together to guarantee that they work with one another correctly.

It would seem reasonable to assume that once an application is installed and working there would be no reason to touch it, or any of the components that it depends on. However, there are myriad reasons why components end up changing. The most common are:

  • A new application is installed that requires a later version of a package containing one of the shared libraries used by the first application.
  • A security or bug fix affects a package used by the application.
  • An updated version of the OS from the distributor includes newer packages.
Whatever the cause, installing a package to satisfy the requirements of some totally unrelated piece of software can break a currently installed application - even one with no libraries in common with the new software - simply because some of the libraries of the two applications are shipped in the same package. The more applications that are installed, the greater the chance of problems, and the more complex it becomes to find a combination of packages that satisfies all of the dependencies.
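The toy model below (all package, library, and application names are hypothetical) shows how this plays out: upgrading one package to satisfy a new application changes a library that a second, unrelated application depends on, breaking it as a side effect.

# Toy model of package-level breakage; all names are hypothetical.
# Which library versions each installed package currently provides.
installed = {
    "libfoo-pkg": {"libfoo.so": "1.0", "libbar.so": "1.0"},
}

# Which library versions each application was built and tested against.
apps = {
    "app-a": {"libfoo.so": "1.0"},
    "app-b": {"libbar.so": "1.0"},
}

def broken_apps(packages):
    """Return applications whose required library versions are no longer provided."""
    provided = {}
    for libs in packages.values():
        provided.update(libs)
    return [name for name, reqs in apps.items()
            if any(provided.get(lib) != ver for lib, ver in reqs.items())]

# A new application forces libfoo-pkg up to 2.0, which also bumps libbar.so
# even though app-b never asked for a newer libbar.
installed["libfoo-pkg"] = {"libfoo.so": "2.0", "libbar.so": "2.0"}
print(broken_apps(installed))   # ['app-a', 'app-b']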

This problem isn't unique to Linux. It's been a long-time problem with many different Unix systems that make heavy use of shared libraries, and is essentially the same as the famous "dll hell" that afflicted older versions of Windows (a Windows .dll file is a dynamically loaded shared library). Unix vendors reduced the problem to manageable proportions by tightly controlling the evolution of the system and providing lots of advance warning to third-party vendors of impending changes. Microsoft adopted similar tactics and became adamant about which libraries third parties were free to change, and which they weren't, making it difficult for third-party applications to overwrite system-provided libraries.

Linux is essentially no better or worse than Unix or Windows in terms of shared library management. What's different is that there's no central coordinating authority to make sure that changes happen in a controlled and consistent manner. In some respects this is one of the strengths of Open Source development; it allows change to happen at its natural pace and forces people to be more aware of the potential problems associated with change. It's also a weakness from the point of view of an end user who has to integrate systems whose components have different dependency requirements.

Linux distributors spend a lot of time and effort making sure that their systems are delivered with all dependencies correctly resolved, and that any updates they create don't disturb this balance. They often go so far as to take security, bug, and performance updates - which tend to be created by the component's development group against the latest version of their software, probably not the version currently shipping - and back-port the changes to the shipping version, then test to make sure the changes don't introduce any inconsistencies. However, distributions have limited control over third parties, both commercial and Open Source, which are reluctant to re-test their products with every new version of a package on which they have a dependency.

One of the interesting benefits Linux holds for its enterprise adopters is that it's multi-sourced. Essentially the same product is available from multiple vendors, improving competition and avoiding lock-in. Although managing multiple Linux distributions adds overhead, some companies prefer not to put all their eggs in one basket and use multiple distributions to help ensure that they don't end up locked in to a specific distribution.

However, this adds a new dimension to the dependency problem.

In many instances, different packages are used on different distributions. There's no single standardized packaging system. Most major Linux distributors use the RPM packaging format, which defines how a package is constructed and what information it carries about individual file locations and dependency rules. Unfortunately, the same package name on two different distributions can contain different revisions of components or even different components. To add to the problem, not all Linux systems use the same package mechanism: Debian-based systems, for example, don't use RPM at all but have their own .deb package format and tooling.
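As a small illustration of the metadata an RPM carries, the sketch below queries a package's dependency rules and file list through the standard rpm command-line tool. It assumes an RPM-based system with the bash package installed; a Debian-based system would answer the same questions through dpkg instead.

# Query the dependency rules and file list recorded for an installed RPM.
# Assumes an RPM-based distribution with the "bash" package installed.
import subprocess

def rpm_query(package, flag):
    result = subprocess.run(["rpm", "-q", flag, package],
                            capture_output=True, text=True, check=True)
    return result.stdout.splitlines()

requires = rpm_query("bash", "--requires")   # capabilities/sonames the package needs
files = rpm_query("bash", "--list")          # individual files the package installs

print(f"bash requires {len(requires)} capabilities, e.g. {requires[:3]}")
print(f"bash installs {len(files)} files, e.g. {files[:3]}")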

The end result is that the configuration management of these systems becomes quite complex. Applying something such as a security fix may necessitate other changes to bring the dependencies back into alignment, and then subsequent testing of all applications running on those revised systems/distributions before they can be declared stable and rolled out into production. This has to happen independently on each distribution platform.

Aduva OnStage

Having lived and worked with this problem, and recognizing the need for a better solution than the mostly manual process that they and everyone else were using, the founders of Aduva began work on automating the dependency resolution process.

One of the key insights behind their system was that the package level - at which most people were working - is too high a level for successful resolution. They built a database, the Aduva Universal KnowledgeBase, of dependency information based on the contents of the packages - the individual files that they contain.

By extending the dependency information down to the file level and including specific dynamic dependency rules, it becomes much easier to find solutions to dependency problems. Once a set of solutions is found, information stored in the database about the composition of the packages is used to select the set of packages, at the highest version levels possible, that implements a solution. Since information is stored on distribution-specific packages, the system can derive package lists specific to individual distributions for the same dependencies.
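The sketch below illustrates that selection step in miniature: given file-level requirements and the files each candidate package version provides, it picks the highest package version that satisfies every requirement. The data structures and package names are hypothetical simplifications, not Aduva's actual KnowledgeBase schema, and requirements are treated as exact file versions for brevity.

# Simplified file-level resolution: choose the highest package versions whose
# contents satisfy the required file versions. Hypothetical data; exact-match
# requirements for simplicity.
required_files = {"libssl.so": "0.9.7", "libcrypto.so": "0.9.7"}

candidates = {
    "openssl": {
        "0.9.6": {"libssl.so": "0.9.6", "libcrypto.so": "0.9.6"},
        "0.9.7": {"libssl.so": "0.9.7", "libcrypto.so": "0.9.7"},
        "0.9.8": {"libssl.so": "0.9.8", "libcrypto.so": "0.9.8"},
    },
}

def resolve(required, candidates):
    """For each package, pick the highest version whose files meet the requirements."""
    chosen = {}
    for pkg, versions in candidates.items():
        ok = [v for v, files in versions.items()
              if all(files.get(f) == want for f, want in required.items() if f in files)]
        if ok:
            chosen[pkg] = max(ok, key=lambda v: tuple(int(x) for x in v.split(".")))
    return chosen

print(resolve(required_files, candidates))   # {'openssl': '0.9.7'}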

The database also contains information on security alerts, errata notifications, and fixes for packages and their components, so in building a specific package list the system takes these into account and finds a path through the dependency graph that avoids as many of them as possible - hopefully all of them. Rarely, when re-evaluating dependencies to add a new application, the only valid path(s) will include components with known security issues. In that case the system delivers its package list but warns that the list will introduce known security problems, leaving system administrators to decide whether to continue or to consult with Aduva's Lab and professional services team to devise a solution.
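A sketch of that security-aware preference, continuing the hypothetical example above: versions carrying a known advisory are skipped when a clean alternative exists, and when none exists the choice is still returned but flagged with a warning. The advisory identifiers are invented for illustration.

# Prefer versions with no known advisories; warn when every valid option has one.
# Advisory data is hypothetical.
known_advisories = {("openssl", "0.9.6"): ["SA-2004-001"]}

def pick_version(package, valid_versions):
    version_key = lambda v: tuple(int(x) for x in v.split("."))
    clean = [v for v in valid_versions if (package, v) not in known_advisories]
    if clean:
        return max(clean, key=version_key), []
    # Every remaining option carries a known issue: return it, but surface the warning.
    best = max(valid_versions, key=version_key)
    return best, known_advisories[(package, best)]

version, warnings = pick_version("openssl", ["0.9.6"])
if warnings:
    print(f"WARNING: openssl {version} introduces known issues: {warnings}")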

Keeping the database up-to-date is the key to success. Aduva works closely with Linux distributions, many different Open Source development communities, and various security groups to ensure that it has complete and current information. A set of tools automates and tests much of the process of determining dependencies.

The complete KnowledgeBase allows system configurations to be generated from combinations of packages beyond those directly supported by a given Linux distribution, provided those packages are known to the KnowledgeBase. Of course, real-world deployments include a huge variety of applications and third-party packages - more than the centrally maintained Universal KnowledgeBase can cover. To make sure the extended dependency requirements of these additional components are taken into account during configuration, a set of tools lets individual customers create their own KnowledgeBase with dependency rules specific to their particular software. This local KnowledgeBase is then used in conjunction with the central KnowledgeBase to ensure that the specific requirements of local software components are taken into account when determining a stable configuration.
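One way to picture the combination - under the assumption, mine rather than Aduva's documentation, that both KnowledgeBases can be treated as rule sets keyed by component - is that the local rules for in-house software are simply consulted alongside the central ones whenever a configuration is evaluated.

# Hypothetical illustration: layer a customer-local rule set on top of the
# central one. The entries here are invented for the example.
central_kb = {"openssl": {"requires": ["zlib >= 1.1.4"]}}
local_kb = {"inhouse-billing": {"requires": ["openssl = 0.9.7", "libxml2 >= 2.5"]}}

def combined_rules(*knowledge_bases):
    merged = {}
    for kb in knowledge_bases:
        merged.update(kb)          # later (local) entries extend the central set
    return merged

rules = combined_rules(central_kb, local_kb)
print(rules["inhouse-billing"]["requires"])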

Aduva has used this core technology as a platform on which to build a set of tools designed to simplify Linux configuration management, application deployment, change management, and patch control for the enterprise. This set of management tools is sold under the name OnStage.

The OnStage toolset provides a very complete set of system configuration management tools, enabling machine types to be defined, a configuration to be generated for that specific set of machines, and the result to be deployed automatically at the click of a button. Changes - deploying an application, adding a patch, or altering the system configuration - can be made to any given set of machines. The set of changes is validated against the local and central KnowledgeBases, automatically updated to ensure dependency rules are met, and pushed to the entire set of machines, either immediately or deferred until a specific time or other criteria are met. Configurations are recorded at each stage, making it trivial to back out any change or set of changes should that be required.

What's unique about the OnStage toolset is that it sits on the KnowledgeBase and takes the uncertainty out of making changes to a stable production platform, which is one more step in making Open Source a viable solution for the enterprise.

More Stories By Zev Laderman

Zev Laderman is chairman and CEO of Aduva. He was previously CEO of Tradeum, President of Verticalnet, and was a VP at Oracle. He has an MBA from the Stanford University Graduate School of Business.
