
Time for a new installation paradigm, Part 2

Today's package managers fail to make installing & upgrading software easy & error-free

(LinuxWorld) — This is Part 2 in a series calling for a radically new approach to Linux software-installation. Part 1 examined many (though not all) of the problems with the current approaches to software-installation. This time, we'll take a closer look at the technological considerations behind one of the biggest issues for software installation: shared libraries. The best way to solve the problem of shared libraries is to understand why they pose a potential problem and how Linux uses them, so let's explore these issues.

Shared libraries remain the pivotal issue for software-installation. How much care programmers and library-maintainers take with their changes determines whether different versions of a shared library remain compatible. In general, though, you can expect fewer compatibility issues as the changes to the version numbers move farther to the right. One can usually expect libsomething version 1.x to be incompatible with libsomething version 2.x. You are less likely to experience problems when you move from libsomething 1.3 to libsomething 1.4, and even less likely to have trouble moving from libsomething 1.4.3 to libsomething 1.4.4.

The nightmare: Fun with ldd

Linux is not immune to DLL hell, but the Linux version of DLL hell generally takes a different form than it does on Windows. One usually crosses over into Windows DLL hell when a program is installed that overwrites one DLL file (a Windows shared library) with one that causes problems. This often poses a catch-22 problem: If you restore the old DLL, the new program breaks; if you keep the new DLL, the old program breaks. The Windows API does provide ways to avoid this problem, but few people used them in the past. It is certainly possible to make the same mistake with Linux, but Linux's shared-library maintainers have traditionally been more careful about compatibility issues. Thus, fewer problems arise — even when one overwrites a widely used shared-library with a newer version.

One reason a Linux library doesn't typically get overwritten by a later version is that shared libraries on Linux are generally installed under filenames that encode their versions, such as libsomething.so.6.0.3. Usually, this is of practical consequence only when you move from one major version of a library to another. Normally, you don't have both libsomething.so.6.0.3 and libsomething.so.6.2.5 on the same system in the same directory, and if you do, one of them is likely to be ignored. Programs rarely load a library by so specific a version. They tend to load libsomething.so.6, which exists as a symbolic link to the latest version, which in this case would be libsomething.so.6.2.5.

You can use a GNU utility called ldd to shed some light on how this all plays out in practice. (A warning: some versions of ldd work by running the program under its dynamic loader, so a maliciously crafted executable can execute code when you run ldd on it. Use it only on binaries you trust.)

The ldd program prints information about an application's shared-library dependencies and what the program will use when you run it. For example, when I type ldd /usr/bin/mutt, the output looks something like this:

libncurses.so.5 => /lib/libncurses.so.5 (0x419d4000)
libsasl.so.7 => /usr/lib/libsasl.so.7 (0x4001c000)
libdb-4.0.so => /usr/lib/libdb-4.0.so (0x44b02000)
libdl.so.2 => /lib/libdl.so.2 (0x41126000)
libidn.so.9 => /usr/lib/libidn.so.9 (0x40027000)
libc.so.6 => /lib/libc.so.6 (0x41014000)
libdb2.so.2 => /lib/libdb2.so.2 (0x4312b000)
libcrypt.so.1 => /lib/libcrypt.so.1 (0x41a15000)
libpam.so.0 => /lib/libpam.so.0 (0x41a61000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x41000000)

This tells you that, given the current system configuration and the way mutt was compiled and built, these are the libraries mutt will find and load when you run it.
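If mutt is not installed on your machine, any dynamically linked binary will do for experimenting. Here is the same check against /bin/ls, which is only a stand-in; the exact libraries listed will differ from system to system:

```shell
# Inspect the shared-library dependencies of a binary.
# /bin/ls is a stand-in; substitute any dynamically linked program.
ldd /bin/ls
```

On a glibc system the list always includes at least libc.so.6 and the dynamic loader itself (the ld-linux entry).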

Let's look at one of the libraries in the above list. When I look at libdb2.so.2 on my system, I can see that this particular library is not a file, but a symbolic link to the file libdb2.so.2.7.7. Make a mental note of this, because it will play an important part later in our search for a solution.
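You can repeat this check on your own system. Since libdb2 may not exist on a modern installation, this sketch resolves whatever file sits behind the C library instead; the awk field assumes ldd's usual "name => path (address)" output format:

```shell
# Find the file a library's name actually resolves to.
# libc is a stand-in for libdb2.so.2 from the article's example.
lib=$(ldd /bin/ls | awk '/libc\.so/ {print $3; exit}')
ls -l "$lib"        # shows the symlink itself, if it is one
readlink -f "$lib"  # shows the real file at the end of the chain
```

On older systems readlink reveals a chain like libc.so.6 -> libc-2.x.so; on recent glibc releases libc.so.6 is a regular file, but the command works either way.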

What happens if we change the name of this symbolic link? For reasons that will become clear in a moment, let's assume that you have only one file or symbolic link called libdb2.so.2 on your system, and that this particular file resides in the /lib directory. Let's look at what happens when we change to the /lib directory and delete the symbolic link or rename it from libdb2.so.2 to something else, say, libdb2.so.2.old. When we run ldd /usr/bin/mutt again, we should see the following change in the output line for this library:

libdb2.so.2 => not found

The application can no longer find the library, even though it still exists as the file libdb2.so.2.7.7. If you've been trying this out for yourself, do not rename the symbolic link back to libdb2.so.2. Instead, run another library tool called ldconfig. This program examines /lib, /usr/lib, and any other library paths listed in the configuration file /etc/ld.so.conf, and (among other things) creates standard symbolic links for the libraries it finds. If your system is configured like mine, then ldconfig will recreate the symbolic link /lib/libdb2.so.2. (You can delete the /lib/libdb2.so.2.old symbolic link now.)
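You can also watch ldconfig's view of the world without touching any symbolic links. On glibc systems, ldconfig -p prints the contents of /etc/ld.so.cache and requires no root privileges; the extra sbin entries below are an assumption about where your distribution installs ldconfig:

```shell
# List the libraries ldconfig currently knows about, no root required.
# ldconfig often lives in /sbin or /usr/sbin, which may not be in PATH.
PATH="$PATH:/sbin:/usr/sbin" ldconfig -p | grep 'libc\.so'
```

If a library you just installed does not appear here, running ldconfig (as root) is usually the missing step.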

If you're thinking way ahead, you might make the mistake of assuming that Linux expects all file names for shared libraries to have the major version at the end of the file name. However, if you look carefully at the first example of ldd output, you'll see one exception in the list, libdb-4.0.so. In other words, one cannot count on this particular rule of thumb.

Now let's move the libraries and their symbolic links to the directory /usr/lib and then run ldd /usr/bin/mutt again. The line that refers to this library should change to read something like this:

libdb2.so.2 => /usr/lib/libdb2.so.2 (0x4312b000)

Searching the paths

The system follows a built-in library search path, which includes both /lib and /usr/lib, in that order (older loaders actually search these two locations in reverse). So, as ldd demonstrates, mutt will still find the needed library even though we moved it. If we move the library to a directory that is not on the search path, however, the "not found" message appears in the ldd output once again.

Now let's assume that you have three versions of libdb2 on your system. One is version 2.7.7, another is version 2.7.6 and the third is version 2.1.8. (Note: I chose the second and third version numbers at random for the sake of example, so please don't e-mail me if no such versions ever existed.)

These libraries reside in the following places, with the corresponding symbolic links:

/lib/libdb2.so.2 -> /lib/libdb2.so.2.7.7
/usr/lib/libdb2.so.2 -> /usr/lib/libdb2.so.2.7.6
/usr/local/lib/libdb2.so.2 -> /usr/local/lib/libdb2.so.2.1.8

When you run ldd /usr/bin/mutt, you are most likely to see that it loads the library from the /lib directory:

libdb2.so.2 => /lib/libdb2.so.2 (0x4312b000)

You can tell the system to search these library paths in some other order by changing the settings in /etc/ld.so.conf or by setting an environment variable such as LD_LIBRARY_PATH. Please read the sidebar for reasons why messing with LD_LIBRARY_PATH is a bad idea. It is the easiest way to demonstrate one of the principles of how libraries are loaded, however, so I hope you'll excuse the use of LD_LIBRARY_PATH for the purpose of illustration. Suppose you set the variable like this:

export LD_LIBRARY_PATH=/usr/lib:/lib

When you now run ldd /usr/bin/mutt, you should see that it finds the library in /usr/lib instead.

Likewise, if you run the following command, then ldd /usr/bin/mutt would tell you it will load the library from /usr/local/lib:

export LD_LIBRARY_PATH=/usr/local/lib:/usr/lib:/lib
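A safer way to experiment is to set the variable for a single command only, so the modified search order never leaks into your shell session or into child processes. The directory below is made up for illustration; entries that do not exist are silently skipped:

```shell
# Prepend a directory to the library search path for one command only.
# /opt/test/lib is a hypothetical path; nonexistent entries are ignored,
# so the command still resolves its libraries from the default locations.
LD_LIBRARY_PATH=/opt/test/lib ldd /bin/ls
```

Because the variable is set only in that one command's environment, nothing you run afterward inherits it.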

If there are no compatibility problems between versions 2.7.6 and 2.7.7, then it shouldn't matter whether mutt finds the library in /lib or /usr/lib. However, if there are any compatibility problems in version 2.1.8, it will make a big difference if mutt tries to load the library from /usr/local/lib. The mutt program may refuse to load, crash during execution, or malfunction in minor, unpredictable ways. One might think that it will either malfunction or fail depending on the severity of the compatibility problem, but that is not always true. It often depends on how one calls the dlopen() function, which is the function that loads a shared library at run time. Call it with the RTLD_LAZY flag and the linker tries to resolve undefined symbols only as they are needed, so the program may function (more or less) until it hits a symbol that the linker cannot resolve. Call it with RTLD_NOW instead and the linker resolves all the undefined symbols immediately, so the program fails up front if any symbol cannot be resolved.
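You can observe the same lazy-versus-immediate distinction from the shell, without writing any dlopen() code. glibc's LD_BIND_NOW environment variable forces the loader to resolve every symbol at startup, the run-time analogue of RTLD_NOW; /bin/true is merely a convenient test subject:

```shell
# With LD_BIND_NOW set, the dynamic linker resolves all symbols before
# main() runs, so a program with an unresolvable symbol fails at startup
# instead of crashing later, mid-execution.
LD_BIND_NOW=1 /bin/true && echo "resolved every symbol at startup"
```

Running a suspect binary this way is a quick smoke test for the missing-symbol failures described above.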

If it were easy to control or predict how libraries are loaded, the ability to choose between those two behaviors might make it easier to install and manage software. As it turns out, it is not easy to predict or control, but it is not impossible, either. Linux has changed the way it handles shared libraries over time, but as far as I can tell, here is the order of priority the current Linux loader uses when it searches for shared libraries:

  1. If the programmer passes a path (a name containing a "/") to the loader function dlopen(), it loads the library from that path, if it exists.

    Otherwise, the loader searches for the library using the following order of preference:

  2. The contents of the environment variable LD_LIBRARY_PATH, if it is set
  3. The contents of /etc/ld.so.cache (you generate this with ldconfig and /etc/ld.so.conf)
  4. The default search path, which is /lib and then /usr/lib

Why is messing with LD_LIBRARY_PATH a bad idea? On the surface, it may seem as if one could use LD_LIBRARY_PATH to solve compatibility problems. Just install all the libraries your applications need, and set it on a per-application basis to search for the correct libraries.

However, LD_LIBRARY_PATH is subject to various subtle problems. For one thing, the program-loader ignores LD_LIBRARY_PATH if the executable file sets the user ID or group ID when you run it (this is determined by the setuid and setgid properties of the executable file).

You can also run into very confusing situations where you make an application run properly with LD_LIBRARY_PATH, after which other applications mysteriously break. One possible explanation is that the application you fixed launched other applications, which inherited the custom LD_LIBRARY_PATH setting. Those other applications probably expected the default library search order, not the modified one in LD_LIBRARY_PATH.

The bottom line is that you generally want to solve library path issues some other way than by using LD_LIBRARY_PATH.
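The glibc loader can narrate this search itself. Setting LD_DEBUG=libs makes it print each location it tries, in order, which is a handy way to verify the priority list on your own system:

```shell
# Ask the dynamic linker to show its library search as it happens.
# The trace goes to stderr; /bin/true keeps the program's own output quiet.
LD_DEBUG=libs /bin/true 2>&1 | head -n 20
```

The trace shows lines such as "find library=libc.so.6" followed by the cache lookup and any directories searched.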

Given only the above, the following factors can be crucial to proper library handling:

  1. The settings in /etc/ld.so.conf
  2. Whether the ldconfig program was executed recently
  3. Whether the shared libraries load other shared libraries
  4. Whether the main program loads plugins or launches other applications
  5. The setting of the environment variables LD_LIBRARY_PATH, PATH, and others
  6. Specific environment variables for the program(s)
  7. Configuration settings for the program, plugins, and child programs
  8. Configuration settings for the environment (KDE or GNOME configuration, for example)
  9. Whether the program uses setgid or setuid
  10. Whether you have duplicate or conflicting libraries in /lib and /usr/lib
  11. Link-time settings vs. run-time settings (for example, whether one uses the -rpath switch for the linker, which inserts the runtime link path into the executable)
  12. Many other factors...
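That link-time -rpath setting in factor 11 is easy to check for. A path recorded at link time appears as an RPATH or RUNPATH entry in the executable's dynamic section; readelf (part of binutils, assumed installed here) can display it:

```shell
# Look for a baked-in run-time library path in an executable.
# Most distribution binaries carry none, so expect the fallback message.
readelf -d /bin/ls | grep -Ei 'rpath|runpath' || echo "no rpath recorded"
```

When an RPATH is present, the loader consults it before LD_LIBRARY_PATH and the cache, which is exactly why it belongs on the list of factors above.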

These are many of the issues we'll have to consider when creating a new installation paradigm, but not nearly all of them. We will lay more of the foundation in the next article. After that, we can begin to pull it all together and assess what it would take to make a radical but positive change in software installation and management.

More Stories By Nicholas Petreley

Nicholas Petreley is a computer consultant and author in Asheville, NC.

