By Makan Pourzandi, Axelle Apvrille, David Gordon, Vincent Roy
December 22, 2003 12:00 AM EST
This article presents a Linux kernel module capable of verifying digital signatures of ELF binaries before running them. This kernel module is available under the GPL license at http://sourceforge.net/projects/disec, and has been successfully tested for kernel 2.5.66 and above.
Why Check the Signature of Your Binaries Before Running Them?
The problem with blindly running executables is that you can never be sure they actually do what you think they are supposed to do (and nothing more). Viruses spread so widely on Microsoft Windows systems mainly because users readily execute whatever they receive, especially if the title is appealing. The LoveLetter virus, which infected over 2.5 million machines, is a famous illustration of this. Yet Linux is unfortunately not immune to malicious code either. By executing unknown and untrusted code, users are exposed to a wide range of Unix worms, viruses, trojans, backdoors, and so on. To prevent this, a possible solution is to digitally sign the binaries you trust, and have the system check their digital signatures before running them: if a signature cannot be verified, the binary is declared corrupt and the operating system will not let it run.
There have already been several initiatives in this domain, such as Tripwire, BSign, Cryptomark, and IBM's Signed Executables, but we believe the DigSig project is the first to be both easily accessible to all (available on Sourceforge, under the GPL license) and to operate at the kernel level (see Table 1).
The DigSig Solution
To avoid reinventing the wheel, we based our solution on the existing open source project BSign, a Debian userspace binary signing package. BSign signs the binaries and embeds the signature in the binary itself. Then, at kernel level, DigSig verifies these signatures at execution time and denies execution if the signature is invalid.
Typically, in our approach, binaries are not signed by vendors; rather, we hand control of the system over to the local administrator, who is responsible for signing all binaries he or she trusts with his or her private key. Those binaries are then verified with the corresponding public key. This means you can still use your favorite (signed) binaries: no change in habits. Basically, DigSig guarantees only two things: (1) if you signed a binary, nobody other than you can modify that binary without being detected, and (2) nobody can run a binary that is unsigned or badly signed. Of course, you should be careful not to sign untrusted code: if malicious code is signed, all security benefits are lost.
How Do I Use DigSig?
DigSig is fairly simple to use. First, you need to sign all binaries you trust with BSign (version 0.4.5 or higher). Then you need to load DigSig with the public key that corresponds to the private key used to sign the binaries.
The following shows step by step how to sign the executable "ps":
$ cp `which ps` ps-test
$ bsign -s ps-test    # sign the binary
$ bsign -V ps-test    # verify the validity of the signature
Next, install the DigSig kernel module. To do so, a recent kernel version is required (2.5.66 or higher), compiled with security options enabled (CONFIG_SECURITY=y). To compile DigSig, assuming your kernel source directory is /usr/src/linux-2.5.66, you do:
$ cd digsig
$ make -C /usr/src/linux-2.5.66 SUBDIRS=$PWD modules
$ cd tools && make
This builds the DigSig kernel module (digsig_verif.ko), and you're probably already halfway through the command to load it, but wait! If you are not cautious about the following point, you might secure your machine so well that you basically freeze it. As a matter of fact, once DigSig is loaded, verification of binary signatures is activated: from then on, binaries will run only if their signature is successfully verified. In all other cases (invalid signature, corrupted file, no signature...), execution of the binary is denied. Consequently, if you forget to sign an essential binary such as /sbin/reboot or /sbin/rmmod, you will find it most embarrassing to reboot the system should you have to. Therefore, for testing purposes, we recommend you initially run DigSig in debug mode. To do this, compile DigSig with the DSI_DIGSIG_DEBUG and DSI_DEBUG flags set in the Makefile:
EXTRA_CFLAGS += -DDSI_DEBUG -DDSI_DIGSIG_DEBUG -I $(obj)
In debug mode, DigSig lets unsigned binaries run. This mode is ideal for testing DigSig, and also for listing the binaries you need to sign to get a fully operational system.
Once this precaution has been taken, it's time to load the DigSig module with your public key as an argument. BSign uses GnuPG keys to sign binaries, so retrieve your public key as follows:
$ gpg --export >> my_public_key.pub
Then log in as root, and use the digsig.init script to load the module.
# ./digsig.init start my_public_key.pub
Testing if sysfs is mounted in /sys.
Loading Digsig module.
Loading public key.
This is it: signature verification is now active. You can check that the signed ps executable (ps-test) runs:
# tail -f /var/log/messages
Sep 16 15:49:16 colby kernel: DSI-LSM MODULE - binary is ./ps-test
Sep 16 15:49:16 colby kernel: DSI-LSM MODULE - dsi_bprm_compute_creds: Found signature
Sep 16 15:49:16 colby kernel: DSI-LSM MODULE - dsi_bprm_compute_creds: Signature verification successful
But corrupted executables won't run:
bash: ./ps-corrupt: Operation not permitted
Sep 16 15:55:20 colby kernel: DSI-LSM MODULE - binary is ./ps-corrupt
Sep 16 15:55:20 colby kernel: DSI-LSM MODULE Error - dsi_bprm_compute_creds: Signatures do not match for ./ps-corrupt
In permissive debug mode, signature verification is skipped for unsigned binaries. In normal mode, however, the control is strictly enforced and unsigned binaries are denied:
bash: ./ps: cannot execute binary file
# tail -f /var/log/messages
Sep 16 16:05:10 colby kernel: DSI-LSM MODULE - binary is ./ps
Sep 16 16:05:10 colby kernel: DSI-LSM MODULE - dsi_bprm_compute_creds: Signatures do not match
DigSig, Behind the Scenes
The core of DigSig lies in the LSM hooks placed in the kernel's routines for executing a binary. The starting point of any binary execution is the sys_execve() system call, which triggers do_execve(). This is the transition between user space and kernel space.
The first LSM hook to be called is bprm_alloc_security, where a security structure can optionally be attached to the linux_binprm structure that represents the binary being loaded. DigSig does not use this hook, as it needs no specific security structure.
Then the kernel tries to find a binary handler (search_binary_handler) to load the file. This is when the LSM hook bprm_check_security is called, and precisely where DigSig performs signature verification of the binary. If verification succeeds, load_elf_binary() gets called, which eventually calls do_mmap(), then the LSM hook file_mmap(), and finally bprm_free_security().
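To make this flow concrete, below is a minimal sketch of an LSM module registering a bprm_check_security hook, in the style of the 2.5/2.6-era LSM API. All names here are hypothetical and the hook body is a placeholder; this is not DigSig's actual code.

#include <linux/module.h>
#include <linux/security.h>
#include <linux/binfmts.h>

/* Called from search_binary_handler(); a non-zero return denies execution. */
static int sketch_bprm_check_security(struct linux_binprm *bprm)
{
        printk(KERN_INFO "checking binary %s\n", bprm->filename);
        /* DigSig would verify the signature embedded in bprm->file here
         * and return -EPERM on failure. */
        return 0;   /* allow execution */
}

static struct security_operations sketch_ops = {
        .bprm_check_security = sketch_bprm_check_security,
};

static int __init sketch_init(void)
{
        /* Fails if another security module is already registered. */
        return register_security(&sketch_ops);
}

static void __exit sketch_exit(void)
{
        unregister_security(&sketch_ops);
}

module_init(sketch_init);
module_exit(sketch_exit);
MODULE_LICENSE("GPL");

register_security() only succeeds on a kernel built with LSM support, which is why DigSig requires CONFIG_SECURITY=y.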
So, this is how DigSig enforces binary signature verification at the kernel level. Now, a brief explanation of the signing mechanism of DigSig's userland counterpart, BSign. When signing an ELF binary, BSign stores the signature in a new section of the binary. To do so, it modifies the ELF's section header table to account for this new section, named "signature" and given the user-defined type 0x80736967 (which comes from the ASCII characters "s", "i", and "g"). You can check your binary's section header table with the command readelf -S <binary>. BSign then computes a SHA-1 hash of the entire file, after having zeroed the new signature section. Next, it prefixes this hash with "#1; bsign v%s", where %s is the version number of BSign, and stores the result at the beginning of the binary's signature section. Finally, BSign calls GnuPG to sign the signature section (containing the hash), and stores the signature at the current position of the signature section. A short compatibility note: GnuPG adds a 32-bit timestamp and a signature class identifier to the buffer it signs.
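As an illustration of this on-disk layout, the following hypothetical user-space sketch (not part of BSign or DigSig) walks a 32-bit ELF file's section header table and reports any section carrying BSign's type, 0x80736967:

#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
                return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        Elf32_Ehdr eh;                          /* ELF header */
        fread(&eh, sizeof(eh), 1, f);

        Elf32_Shdr *sh = malloc(eh.e_shnum * sizeof(*sh));
        fseek(f, eh.e_shoff, SEEK_SET);         /* section header table */
        fread(sh, sizeof(*sh), eh.e_shnum, f);

        for (int i = 0; i < eh.e_shnum; i++)
                if (sh[i].sh_type == 0x80736967)        /* "sig" */
                        printf("signature section: offset 0x%lx, %lu bytes\n",
                               (unsigned long)sh[i].sh_offset,
                               (unsigned long)sh[i].sh_size);
        free(sh);
        fclose(f);
        return 0;
}

readelf -S gives the same information; the point is how little is needed to locate the signature section by its type.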
From a cryptographic point of view, DigSig needs to verify BSign's signatures, i.e., RSA signatures. More precisely, this consists of, on one side, hashing the binary with a one-way function (SHA-1) and padding the result (EMSA-PKCS1 v1.5), and, on the other side, "decrypting" the signature with the public key and checking that the result matches the padded hash.
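For reference, here is a small sketch of the EMSA-PKCS1-v1.5 encoding step for SHA-1, as specified in PKCS#1 (the function name is ours; DigSig's actual routine may differ). The verifier builds this block and compares it against the RSA "decryption" of the signature:

#include <string.h>
#include <stddef.h>

/* ASN.1 DigestInfo prefix for SHA-1, from PKCS#1. */
static const unsigned char sha1_prefix[] = {
        0x30, 0x21, 0x30, 0x09, 0x06, 0x05, 0x2b, 0x0e,
        0x03, 0x02, 0x1a, 0x05, 0x00, 0x04, 0x14
};

/* EM = 0x00 || 0x01 || PS (0xFF bytes) || 0x00 || DigestInfo || hash,
 * where em_len is the RSA modulus length in bytes and hash is the
 * 20-byte SHA-1 digest. Returns 0 on success. */
static int emsa_pkcs1_v15_encode(unsigned char *em, size_t em_len,
                                 const unsigned char *hash)
{
        size_t t_len = sizeof(sha1_prefix) + 20;
        size_t ps_len;

        if (em_len < t_len + 11)        /* need at least 8 bytes of PS */
                return -1;
        ps_len = em_len - t_len - 3;

        em[0] = 0x00;
        em[1] = 0x01;
        memset(em + 2, 0xFF, ps_len);
        em[2 + ps_len] = 0x00;
        memcpy(em + 3 + ps_len, sha1_prefix, sizeof(sha1_prefix));
        memcpy(em + 3 + ps_len + sizeof(sha1_prefix), hash, 20);
        return 0;
}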
PKCS#1 padding is pretty simple to implement, so we had no problems coding it. Concerning SHA-1 hashing, we used Linux's kernel CryptoAPI (a sketch of these calls follows the list):
- We allocate a crypto_tfm structure (crypto_alloc_tfm), and use it to initialize the hashing process (crypto_digest_init).
- Then we read the binary block by block, and feed it to the hashing routine (crypto_digest_update).
- Finally, we retrieve the hash (crypto_digest_final).
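A minimal sketch of those three steps with the 2.5/2.6-era CryptoAPI might look as follows. The function name and buffer handling are ours, not DigSig's: DigSig reads the file block by block, whereas this sketch assumes a single kmalloc'd buffer that does not cross a page boundary.

#include <linux/crypto.h>
#include <linux/errno.h>
#include <linux/mm.h>                   /* virt_to_page, offset_in_page */
#include <asm/scatterlist.h>

/* Hash one kmalloc'd buffer with SHA-1; out must hold 20 bytes. */
static int sketch_sha1(const char *data, unsigned int len, u8 *out)
{
        struct crypto_tfm *tfm;
        struct scatterlist sg;

        tfm = crypto_alloc_tfm("sha1", 0);
        if (tfm == NULL)
                return -ENOMEM;

        crypto_digest_init(tfm);

        /* The CryptoAPI takes scatterlists rather than plain pointers. */
        sg.page = virt_to_page(data);
        sg.offset = offset_in_page(data);
        sg.length = len;
        crypto_digest_update(tfm, &sg, 1);

        crypto_digest_final(tfm, out);  /* the 20-byte digest */
        crypto_free_tfm(tfm);
        return 0;
}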
For RSA signature verification itself, the CryptoAPI was of no help, as it did not support asymmetric algorithms; we therefore ported the code we needed from GnuPG's math library to kernel space:
- Only the RSA signature verification routines have been kept. For instance, the functions that generate large primes have been removed.
- Allocations on the stack have been limited to the strict minimum.
We have performed two different kinds of benchmarks for DigSig: a benchmark of the real impact of DigSig for users (how much they feel the system is slowed down), and a more precise benchmark evaluating the exact overhead induced by our kernel module.
The first set of benchmarks was performed by comparing how long it takes to run an executable with and without DigSig. To do so, we used the "time" command over executions ranging from short to long. Each of the following benchmarks was run 20 times:
% time /bin/ls -Al # times /bin/ls
% time ./digsig.init compile # times compilation with gcc
% time tar jxvfp linux-2.6.0-test8.tar.bz2 # times tar
On a Pentium 4, 2.2GHz, with 512MB of RAM, with DigSig using GnuPG's math library, we obtained the results displayed in Table 2. They clearly show that the impact of DigSig is significant for short executions (such as ls) but becomes completely negligible for longer executions such as compiling a project with gcc or untarring sources with tar.
Second, we measured the exact overhead introduced by our kernel module. To do so, we compared jiffies at the beginning and at the end of bprm_check_security. In brief, jiffies count the number of clock ticks since the system booted, so they are a precise way to measure time inside the Linux kernel; on our system (HZ=1000), one jiffy corresponds to one millisecond. We ran each binary 30 times (see Table 3) with DigSig compiled against GnuPG's math library.
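The measurement pattern itself is straightforward; a minimal, era-appropriate sketch (not DigSig's actual code) is:

#include <linux/jiffies.h>
#include <linux/kernel.h>

static void timed_verification(void)
{
        unsigned long t0 = jiffies;

        /* ... signature verification would run here ... */

        /* With HZ=1000, each elapsed jiffy is one millisecond. */
        printk(KERN_DEBUG "verification took %lu ms\n",
               ((jiffies - t0) * 1000) / HZ);
}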
The results show that, naturally, the digital signature verification overhead increases with the executable's size (no surprise, since there is more data to hash).
Finally, to help us optimize our code, we ran OProfile, a system profiler for Linux, over DigSig (see Table 4). The results clearly indicate that the modular exponentiation routines are the most expensive, so this is where we should concentrate our optimization efforts for future releases. In particular, we plan to port the assembly code of the math libraries to the kernel, instead of using pure C code.
Conclusion and Future Work
We've shown how DigSig can help you in mitigating the risk of running malicious code. Our future work will focus on two main areas: performance and features.
Obviously, as the signature verification overhead impacts all binaries, it is important to optimize it. There are several paths we might follow, such as caching signature verifications, sporadically verifying signatures, or optimizing the math libraries.
From a feature point of view, we recently implemented digital signature verification of shared libraries: if malicious code is inserted into a library, all executables (even signed ones) that link to this library are compromised, so checking executables alone is a severe limitation. This implementation is currently in the testing phase and will be released soon.