By Makan Pourzandi, Axelle Apvrille, David Gordon, Vincent Roy
December 22, 2003
This article presents a Linux kernel module capable of verifying digital signatures of ELF binaries before running them. This kernel module is available under the GPL license at http://sourceforge.net/projects/disec, and has been successfully tested for kernel 2.5.66 and above.
Why Check the Signature of Your Binaries Before Running Them?
The problem with blindly running executables is that you are never sure they actually do what you think they are supposed to do (and nothing more). Viruses spread so much on Microsoft Windows systems mainly because users are frantic to execute whatever they receive, especially if the title is appealing. The LoveLetter virus, with over 2.5 million machines infected, is a famous illustration of this. Yet Linux is unfortunately not immune to malicious code either. By executing unknown and untrusted code, users are exposed to a wide range of Unix worms, viruses, trojans, backdoors, and so on. To prevent this, a possible solution is to digitally sign binaries you trust, and have the system check their digital signature before running them: if the signature cannot be verified, the binary is declared corrupt and the operating system will not let it run.
There have already been several initiatives in this domain, such as Tripwire, BSign, Cryptomark, and IBM's Signed Executables, but we believe the DigSig project is the first to be both easily accessible to all (available on Sourceforge, under the GPL license) and to operate at the kernel level (see Table 1).
The DigSig Solution
To avoid reinventing the wheel, we based our solution on the existing open source project BSign, a Debian userspace binary signing package. BSign signs the binaries and embeds the signature in the binary itself. Then, at kernel level, DigSig verifies these signatures at execution time and denies execution if the signature is invalid.
In our approach, binaries are typically not signed by vendors; rather, we hand control of the system over to the local administrator, who is responsible for signing all binaries he or she trusts with his or her private key. Those binaries are then verified with the corresponding public key. This means you can still use your favorite (signed) binaries: no change in habits. Basically, DigSig guarantees only two things: (1) if you signed a binary, nobody other than you can modify that binary without being detected, and (2) nobody can run a binary that is not signed or is badly signed. Of course, you should be careful not to sign untrusted code: if malicious code is signed, all security benefits are lost.
How Do I Use DigSig?
DigSig is fairly simple to use. First, you need to sign all binaries you trust with BSign (version 0.4.5 or higher). Then you need to load DigSig with the public key that corresponds to the private key used to sign the binaries.
The following shows step by step how to sign the executable "ps":
$ cp `which ps` ps-test
$ bsign -s ps-test    # sign the binary
$ bsign -V ps-test    # verify the validity of the signature
Next, install the DigSig kernel module. To do so, a recent kernel version is required (2.5.66 or higher), compiled with security options enabled (CONFIG_SECURITY=y). To compile DigSig, assuming your kernel source directory is /usr/src/linux-2.5.66, you do:
$ cd digsig
$ make -C /usr/src/linux-2.5.66 SUBDIRS=$PWD modules
$ cd tools && make
This builds the DigSig kernel module (digsig_verif.ko), and you're probably already halfway through the command to load it, but wait! If you are not cautious about the following point, you might secure your machine so well you'll basically freeze it. As a matter of fact, once DigSig is loaded, verification of binary signatures is activated. At that time, binaries will be able to run only if their signature is successfully verified. In all other cases (invalid signature, corrupted file, no signature...), execution of the binary will be denied. Consequently, if you forget to sign an essential binary such as /sbin/reboot or /sbin/rmmod, you'll have a hard time rebooting the system if you have to. Therefore, for testing purposes, we recommend you initially run DigSig in debug mode. To do this, compile DigSig with the DSI_DIGSIG_DEBUG and DSI_DEBUG flags set in the Makefile:
EXTRA_CFLAGS += -DDSI_DEBUG -DDSI_DIGSIG_DEBUG -I $(obj)
In debug mode, DigSig lets unsigned binaries run. This mode is ideal for testing DigSig and for listing the binaries you need to sign to get a fully operational system.
Once this precaution has been taken, it's time to load the DigSig module, with your public key as an argument. BSign uses GnuPG keys to sign binaries, so retrieve your public key as follows:
$ gpg --export >> my_public_key.pub
Then log in as root, and use the digsig.init script to load the module.
# ./digsig.init start my_public_key.pub
Testing if sysfs is mounted in /sys.
Loading Digsig module.
Loading public key.
This is it: signature verification is now active. You can check that the signed ps executable (ps-test) runs:
# tail -f /var/log/messages
Sep 16 15:49:16 colby kernel: DSI-LSM MODULE - binary is ./ps-test
Sep 16 15:49:16 colby kernel: DSI-LSM MODULE - dsi_bprm_compute_creds: Found signature
Sep 16 15:49:16 colby kernel: DSI-LSM MODULE - dsi_bprm_compute_creds: Signature
But corrupted executables won't run:
bash: ./ps-corrupt: Operation not permitted
Sep 16 15:55:20 colby kernel: DSI-LSM MODULE - binary is ./ps-corrupt
Sep 16 15:55:20 colby kernel: DSI-LSM MODULE Error - dsi_bprm_compute_creds: Signatures do not match for ./ps-corrupt
If the permissive debug mode is set, unsigned binaries are simply allowed to run. Otherwise, in normal operation, the control is strictly enforced:
bash: ./ps: cannot execute binary file
# tail -f /var/log/messages
Sep 16 16:05:10 colby kernel: DSI-LSM MODULE - binary is ./ps
Sep 16 16:05:10 colby kernel: DSI-LSM MODULE - dsi_bprm_compute_creds: Signatures do not match
DigSig, Behind the Scenes
The core of DigSig lies in the LSM hooks placed in the kernel's routines for executing a binary. The starting point of any binary execution is the sys_execve() system call, which triggers do_execve(). This is the transition between user space and kernel space.
The first LSM hook to be called is bprm_alloc_security, where a security structure is optionally attached to the linux_binprm structure that represents the binary being loaded. DigSig does not use this hook, as it doesn't need any specific security structure.
Then, the kernel tries to find a binary handler (search_binary_handler) to load the file. This is when the LSM hook bprm_check_security is called, and precisely when DigSig performs signature verification of the binary. If successful, load_elf_binary() gets called, which eventually calls do_mmap(), then the LSM hook file_mmap(), and finally bprm_free_security().
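To make the hook placement concrete, here is a minimal skeleton (our sketch, not DigSig's actual code) of how an LSM of the 2.5/2.6 era registers a bprm_check_security hook; the verification logic itself is only hinted at in comments:

#include <linux/module.h>
#include <linux/security.h>
#include <linux/binfmts.h>

/* Called from search_binary_handler() for every execve().
 * Return 0 to allow execution, -EPERM to deny it. */
static int skel_bprm_check_security(struct linux_binprm *bprm)
{
	/* A DigSig-like module would locate the ELF "signature"
	 * section of bprm->file here, hash the file, and verify
	 * the RSA signature against the loaded public key. */
	return 0; /* this skeleton allows everything */
}

static struct security_operations skel_ops = {
	.bprm_check_security = skel_bprm_check_security,
};

static int __init skel_init(void)
{
	/* Install the hooks; this fails if another security
	 * module is already registered. */
	return register_security(&skel_ops);
}

static void __exit skel_exit(void)
{
	unregister_security(&skel_ops);
}

module_init(skel_init);
module_exit(skel_exit);
MODULE_LICENSE("GPL");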
So, this is how DigSig enforces binary signature verification at kernel level. Now, a brief explanation of the signing mechanism of DigSig's userland counterpart, BSign. When signing an ELF binary, BSign stores the signature in a new section in the binary. To do so, it modifies the ELF's section header table to account for this new section, with the name "signature" and a user-defined type 0x80736967 (which comes from the ASCII characters "s", "i", and "g"). You can check your binary's section header table with the command readelf -S <binary>. BSign then computes a SHA-1 hash of the entire file, after having zeroed the additional signature section. Next, it prefixes this hash with "#1; bsign v%s", where %s is the version number of BSign, and stores the result at the beginning of the binary's signature section. Finally, BSign calls GnuPG to sign the signature section (containing the hash), and stores the signature at the current position of the signature section. A short compatibility note: GnuPG adds a 32-bit timestamp and a signature class identifier to the buffer it signs.
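As an illustration, the following user-space sketch (hypothetical code, 32-bit ELF only, minimal error handling) walks a binary's section header table looking for a section with BSign's type, which is essentially what readelf -S lets you do by eye:

#include <elf.h>
#include <stdio.h>
#include <stdlib.h>

#define BSIGN_SH_TYPE 0x80736967 /* user-defined "sig" type */

int main(int argc, char **argv)
{
	Elf32_Ehdr eh;
	Elf32_Shdr *sh;
	FILE *f;
	int i;

	if (argc != 2 || !(f = fopen(argv[1], "rb")))
		return 1;

	/* Read the ELF header, then the whole section header table. */
	fread(&eh, sizeof(eh), 1, f);
	sh = malloc(eh.e_shnum * sizeof(*sh));
	fseek(f, eh.e_shoff, SEEK_SET);
	fread(sh, sizeof(*sh), eh.e_shnum, f);

	for (i = 0; i < eh.e_shnum; i++)
		if (sh[i].sh_type == BSIGN_SH_TYPE)
			printf("signature section: offset 0x%lx, %lu bytes\n",
			       (unsigned long)sh[i].sh_offset,
			       (unsigned long)sh[i].sh_size);

	free(sh);
	fclose(f);
	return 0;
}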
From a cryptographic point of view, DigSig needs to verify BSign's signatures, i.e., RSA signatures. More precisely, this consists of, on one side, hashing the binary with a one-way function (SHA-1) and padding the result (EMSA-PKCS1 v1.5), and, on the other side, "decrypting" the signature with the public key and verifying that the result matches the padded hash.
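For concreteness, here is a small sketch of the EMSA-PKCS1 v1.5 encoding step (our illustration, assuming the standard DER DigestInfo prefix for SHA-1); the verifier builds this block and compares it with the "decrypted" signature:

#include <string.h>

/* DER-encoded DigestInfo prefix for SHA-1 (15 bytes). */
static const unsigned char sha1_prefix[] = {
	0x30, 0x21, 0x30, 0x09, 0x06, 0x05, 0x2b, 0x0e,
	0x03, 0x02, 0x1a, 0x05, 0x00, 0x04, 0x14
};

/* Build EM = 0x00 || 0x01 || 0xFF..0xFF || 0x00 || DigestInfo || hash.
 * em_len is the RSA modulus size in bytes; hash is the 20-byte
 * SHA-1 digest. Returns 0 on success, -1 if the modulus is too small. */
int emsa_pkcs1_v15(unsigned char *em, unsigned int em_len,
                   const unsigned char *hash)
{
	unsigned int t_len = sizeof(sha1_prefix) + 20;
	unsigned int ps_len;

	if (em_len < t_len + 11) /* padding must be at least 8 bytes */
		return -1;
	ps_len = em_len - t_len - 3;

	em[0] = 0x00;
	em[1] = 0x01;
	memset(em + 2, 0xFF, ps_len);
	em[2 + ps_len] = 0x00;
	memcpy(em + 3 + ps_len, sha1_prefix, sizeof(sha1_prefix));
	memcpy(em + 3 + ps_len + sizeof(sha1_prefix), hash, 20);
	return 0;
}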
PKCS#1 padding is pretty simple to implement, so we had no problems coding it. Concerning SHA-1 hashing, we used Linux's kernel CryptoAPI (a sketch of these three steps follows the list):
- We allocate a crypto_tfm structure (crypto_alloc_tfm), and use it to initialize the hashing process (crypto_digest_init).
- Then we read the binary block by block, and feed it to the hashing routine (crypto_digest_update).
- Finally, we retrieve the hash (crypto_digest_final).
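Put together, the three steps above look roughly like the sketch below, written against the 2.5/2.6-era CryptoAPI (the interface changed in later kernels). hash_buffer() is our hypothetical helper; DigSig feeds the file to crypto_digest_update() block by block, while this sketch hashes a single in-memory buffer and, for brevity, assumes it does not cross a page boundary:

#include <linux/crypto.h>
#include <linux/mm.h>
#include <asm/scatterlist.h>

#define SHA1_DIGEST_SIZE 20

static int hash_buffer(const u8 *buf, unsigned int len, u8 *digest)
{
	struct crypto_tfm *tfm;
	struct scatterlist sg;

	tfm = crypto_alloc_tfm("sha1", 0);  /* step 1: allocate */
	if (!tfm)
		return -ENOMEM;

	/* Describe the buffer to the CryptoAPI. */
	sg.page = virt_to_page(buf);
	sg.offset = offset_in_page(buf);
	sg.length = len;

	crypto_digest_init(tfm);            /* step 1: initialize */
	crypto_digest_update(tfm, &sg, 1);  /* step 2: feed data */
	crypto_digest_final(tfm, digest);   /* step 3: retrieve hash */

	crypto_free_tfm(tfm);
	return 0;
}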
For the RSA computations themselves, we ported GnuPG's multi-precision math library to the kernel, trimmed down for our needs:
- Only the RSA signature verification routines have been kept. For instance, functions to generate large primes have been removed.
- Allocations on the stack have been limited to the strict minimum.
We have performed two different kinds of benchmarks for DigSig: a benchmark of the real impact of DigSig for users (how much they feel the system is slowed down), and a more precise benchmark evaluating the exact overhead induced by our kernel module.
The first set of benchmarks was performed by comparing how long it takes to run an executable with and without DigSig. To do so, we used the time command on executions ranging from fast to long. Each of the following benchmarks was run 20 times:
% time /bin/ls -Al # times /bin/ls
% time ./digsig.init compile # times compilation with gcc
% time tar jxvfp linux-2.6.0-test8.tar.bz2 # times tar
On a Pentium 4 at 2.2GHz with 512MB of RAM, with DigSig using GnuPG's math library, we obtained the results shown in Table 2. They clearly show that the impact of DigSig is significant for short executions (such as ls) but becomes completely negligible for longer executions, such as compiling a project with gcc or untarring sources with tar.
Second, we measured the exact overhead introduced by our kernel module. To do so, we compared jiffies at the beginning and at the end of bprm_check_security. In brief, jiffies count the number of clock ticks since the system booted, so they are a precise way to measure time in the Linux kernel. In our case (HZ=1000), one jiffy corresponds to one millisecond. We ran each binary 30 times (see Table 3) for DigSig compiled with GnuPG.
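The measurement itself is just a matter of sampling jiffies around the hook body, along these lines (a sketch; verify_signature() stands in for DigSig's actual verification path):

#include <linux/jiffies.h>

static int timed_bprm_check_security(struct linux_binprm *bprm)
{
	unsigned long start = jiffies;
	int ret = verify_signature(bprm); /* hypothetical helper */

	/* With HZ=1000, (jiffies - start) is already in milliseconds. */
	printk(KERN_DEBUG "digsig: verification took %lu ms\n",
	       (jiffies - start) * 1000 / HZ);
	return ret;
}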
The results show that, naturally, the digital signature verification overhead increases with the executable's size, which is no surprise: there is more data to hash.
Finally, to assist us in optimizing our code, we ran OProfile, a system profiler for Linux, over DigSig (see Table 4). The results clearly indicate that the modular exponentiation routines are the most expensive, so this is where we should concentrate our optimization efforts for future releases. In particular, we plan to port the assembly code of the math libraries to the kernel, instead of using pure C code.
Conclusion and Future Work
We've shown how DigSig can help you in mitigating the risk of running malicious code. Our future work will focus on two main areas: performance and features.
Obviously, as the signature verification overhead impacts all binaries, it is important to optimize it. There are several paths we might follow, such as caching signature verifications, sporadically verifying signatures, or optimizing the math libraries.
From a feature point of view, we recently implemented digital signature verification of shared libraries: if malicious code is inserted into a library, all executables (even signed ones) that link to it are compromised, so leaving libraries unchecked was a severe limitation. This implementation is currently in the testing phase and will be released soon.