By Makan Pourzandi, Axelle Apvrille, David Gordon, Vincent Roy
December 22, 2003
This article presents a Linux kernel module capable of verifying digital signatures of ELF binaries before running them. This kernel module is available under the GPL license at http://sourceforge.net/projects/disec, and has been successfully tested for kernel 2.5.66 and above.
Why Check the Signature of Your Binaries Before Running Them?
The problem with blindly running executables is that you are never sure they actually do what you think they are supposed to do (and nothing more). Viruses spread so much on Microsoft Windows systems mainly because users are frantic to execute whatever they receive, especially if the title is appealing. The LoveLetter virus, with over 2.5 million machines infected, is a famous illustration of this. Yet Linux is unfortunately not immune to malicious code either. By executing unknown and untrusted code, users are exposed to a wide range of Unix worms, viruses, trojans, backdoors, and so on. To prevent this, a possible solution is to digitally sign binaries you trust, and have the system check their digital signature before running them: if the signature cannot be verified, the binary is declared corrupt and the operating system will not let it run.
There have already been several initiatives in this domain, such as Tripwire, BSign, Cryptomark, and IBM's Signed Executables, but we believe the DigSig project is the first to be both easily accessible to all (available on Sourceforge, under the GPL license) and to operate at the kernel level (see Table 1).
The DigSig Solution
To avoid reinventing the wheel, we based our solution on the existing open source project BSign, a Debian userspace binary signing package. BSign signs the binaries and embeds the signature in the binary itself. Then, at kernel level, DigSig verifies these signatures at execution time and denies execution if the signature is invalid.
Typically, in our approach, binaries are not signed by vendors, rather we hand over control of the system to the local administrator, who is responsible for signing all binaries he or she trusts with his or her private key. Then, those binaries are verified with the corresponding public key. This means you can still use your favorite (signed) binaries: no change in habits. Basically, DigSig guarantees only two things: (1) if you signed a binary, nobody other than you can modify that binary without being detected, and (2) nobody can run a binary that is not signed or is badly signed. Of course, you should be careful not to sign untrusted code: if malicious code is signed, all security benefits are lost.
How Do I Use DigSig?
DigSig is fairly simple to use. First, you need to sign all binaries you trust with BSign (version 0.4.5 or higher). Then you need to load DigSig with the public key that corresponds to the private key used to sign the binaries.
The following shows step by step how to sign the executable "ps":
$ cp `which ps` ps-test
$ bsign -s ps-test    # Sign the binary
$ bsign -V ps-test    # Verify the validity of the signature
Next, install the DigSig kernel module. To do so, a recent kernel version is required (2.5.66 or higher), compiled with security options enabled (CONFIG_SECURITY=y). To compile DigSig, assuming your kernel source directory is /usr/src/linux-2.5.66, you do:
$ cd digsig
$ make -C /usr/src/linux-2.5.66 SUBDIRS=$PWD modules
$ cd tools && make
This builds the DigSig kernel module (digsig_verif.ko), and you're probably already halfway through the command to load it, but wait! If you are not cautious about the following point, you might secure your machine so well you'll basically freeze it. As a matter of fact, once DigSig is loaded, verification of binary signatures is activated. At that time, binaries will be able to run only if their signature is successfully verified. In all other cases (invalid signature, corrupted file, no signature...), execution of the binary will be denied. Consequently, if you forget to sign an essential binary such as /sbin/reboot, or /sbin/rmmod, you'll be most embarrassed to reboot the system if you have to. Therefore, for testing purposes, we recommend you initially run DigSig in debug mode. To do this, compile DigSig with the DSI_DIGSIG_DEBUG and DSI_DEBUG flags set in the Makefile:
EXTRA_CFLAGS += -DDSI_DEBUG -DDSI_DIGSIG_DEBUG -I $(obj)
In debug mode, DigSig lets unsigned binaries run. This mode is ideal for testing DigSig, and also for listing the binaries you need to sign to get a fully operational system.
Once this precaution has been taken, it's time to load the DigSig module, with your public key as an argument. BSign uses GnuPG keys to sign binaries, so retrieve your public key as follows:
$ gpg --export >> my_public_key.pub
Then log in as root, and use the digsig.init script to load the module:
# ./digsig.init start my_public_key.pub
Testing if sysfs is mounted in /sys.
Loading Digsig module.
Loading public key.
This is it: signature verification is activated. You can check that the signed ps executable (ps-test) runs:
# tail -f /var/log/messages
Sep 16 15:49:16 colby kernel: DSI-LSM MODULE - binary is ./ps-test
Sep 16 15:49:16 colby kernel: DSI-LSM MODULE - dsi_bprm_compute_creds: Found signature
Sep 16 15:49:16 colby kernel: DSI-LSM MODULE - dsi_bprm_compute_creds: Signature
But, corrupted executables won't run:
bash: ./ps-corrupt: Operation not permitted
Sep 16 15:55:20 colby kernel: DSI-LSM MODULE - binary is ./ps-corrupt
Sep 16 15:55:20 colby kernel: DSI-LSM MODULE Error - dsi_bprm_compute_creds: Signatures
do not match for ./ps-corrupt
If permissive debug mode is set, unsigned binaries are simply let through. Otherwise, in normal behavior, the check is strictly enforced:
bash: ./ps: cannot execute binary file
# tail -f /var/log/messages
Sep 16 16:05:10 colby kernel: DSI-LSM MODULE - binary is ./ps
Sep 16 16:05:10 colby kernel: DSI-LSM MODULE - dsi_bprm_compute_creds:
Signatures do not match
DigSig, Behind the Scenes
The core of DigSig lies in the LSM hooks placed in the kernel's routines for executing a binary. The starting point of any binary execution is the execve() system call, which enters sys_execve() and then do_execve(). This is the transition between user space and kernel space.
The first LSM hook to be called is bprm_alloc_security, where a security structure is optionally attached to the linux_binprm structure that represents the binary being loaded. DigSig does not use this hook, as it doesn't need a specific security structure.
Then, the kernel tries to find a binary handler (search_binary_handler) to load the file. This is when the LSM hook bprm_check_security is called, and precisely when DigSig performs signature verification of the binary. If successful, load_elf_binary() gets called, which eventually calls do_mmap(), then the LSM hook file_mmap(), and finally bprm_free_security().
So, this is how DigSig enforces binary signature verification at kernel level. Now, a brief explanation of the signing mechanism of DigSig's userland counterpart, BSign. When signing an ELF binary, BSign stores the signature in a new section in the binary. To do so, it modifies the ELF's section header table to account for this new section, with the name "signature" and a user-defined type 0x80736967 (which comes from the ASCII characters "s", "i", and "g"). You can check your binary's section header table with the command readelf -S <binary>. BSign then computes a SHA-1 hash of the entire file, after having zeroed the new signature section. Next, it prefixes this hash with "#1; bsign v%s", where %s is the version number of BSign, and stores the result at the beginning of the binary's signature section. Finally, BSign calls GnuPG to sign the signature section (containing the hash), and stores the signature at the current position of the signature section. A short compatibility note: GnuPG adds a 32-byte timestamp and a signature class identifier to the buffer it signs.
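As a rough user-space illustration of the hashing step (this is not BSign's actual code, and the toy binary layout and section offsets below are invented for the example):

```python
import hashlib

def bsign_style_hash(image: bytes, sig_offset: int, sig_size: int,
                     version: str = "0.4.5") -> bytes:
    # Zero out the signature section, SHA-1 the whole image,
    # then prefix the version banner, as described above.
    zeroed = (image[:sig_offset]
              + b"\x00" * sig_size
              + image[sig_offset + sig_size:])
    return ("#1; bsign v%s" % version).encode() + hashlib.sha1(zeroed).digest()

# Toy 100-byte "binary" with a 16-byte signature section at offset 64.
image = b"\x7fELF" + b"A" * 60 + b"S" * 16 + b"B" * 20
banner_and_hash = bsign_style_hash(image, sig_offset=64, sig_size=16)
```

Because the signature section is zeroed before hashing, whatever BSign later writes into that section cannot change the hash being signed.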
From a cryptographic point of view, DigSig needs to verify BSign's signatures, i.e., RSA signatures. More precisely, this consists of, on one side, hashing the binary with a one-way function (SHA-1) and padding the result (EMSA-PKCS1-v1.5), and, on the other side, "decrypting" the signature with the public key and verifying that the result matches the padded hash.
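A minimal sketch of the padding side in Python (not DigSig's kernel code; the DigestInfo constant is the standard SHA-1 prefix defined by PKCS#1):

```python
import hashlib

# Standard ASN.1 DigestInfo prefix for SHA-1 (PKCS#1 v1.5).
SHA1_DIGEST_INFO = bytes.fromhex("3021300906052b0e03021a05000414")

def emsa_pkcs1_v15_encode(message: bytes, em_len: int) -> bytes:
    """Build the padded block 0x00 0x01 FF..FF 0x00 DigestInfo || SHA1(message),
    where em_len is the RSA modulus length in bytes."""
    t = SHA1_DIGEST_INFO + hashlib.sha1(message).digest()
    ps_len = em_len - len(t) - 3
    if ps_len < 8:
        raise ValueError("intended encoded message length too short")
    return b"\x00\x01" + b"\xff" * ps_len + b"\x00" + t

# The verifier computes pow(signature, e, n), serializes the result to
# em_len bytes, and compares it with this encoding of the hashed binary.
em = emsa_pkcs1_v15_encode(b"binary contents", 128)  # 128 bytes = 1024-bit RSA
```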
PKCS#1 padding is fairly simple to implement, so we had no problems coding it. For SHA-1 hashing, we used the Linux kernel's CryptoAPI:
- We allocate a crypto_tfm structure (crypto_alloc_tfm), and use it to initialize the hashing process (crypto_digest_init).
- Then we read the binary block by block, and feed it to the hashing routine (crypto_digest_update).
- Finally, we retrieve the hash (crypto_digest_final).
For the RSA operations themselves, we ported the necessary routines from GnuPG's math library to the kernel, trimming them down in the process:
- Only the RSA signature verification routines have been kept; for instance, the functions to generate large primes have been removed.
- Allocations on the stack have been limited to the strict minimum.
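The hashing steps above (crypto_digest_init, crypto_digest_update block by block, crypto_digest_final) follow the familiar init/update/final pattern. As a user-space stand-in, using Python's hashlib rather than the kernel API:

```python
import hashlib
import io

def hash_stream(f, block_size: int = 4096) -> bytes:
    h = hashlib.sha1()       # plays the role of crypto_alloc_tfm + crypto_digest_init
    while True:
        block = f.read(block_size)
        if not block:
            break
        h.update(block)      # crypto_digest_update, one block at a time
    return h.digest()        # crypto_digest_final

digest = hash_stream(io.BytesIO(b"\x7fELF" + b"\x00" * 10000))
```

Feeding the file block by block keeps memory use bounded, which matters in the kernel where large contiguous buffers are best avoided.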
We have performed two different kinds of benchmarks for DigSig: a benchmark of the real impact of DigSig for users (how much they feel the system is slowed down), and a more precise benchmark evaluating the exact overhead induced by our kernel module.
The first set of benchmarks was performed by comparing how long it takes to run an executable with and without DigSig. To do so, we used the time command on executions ranging from short to long. Each benchmark was run 20 times:
% time /bin/ls -Al # times /bin/ls
% time ./digsig.init compile # times compilation with gcc
% time tar jxvfp linux-2.6.0-test8.tar.bz2 # times tar
On a Pentium 4, 2.2GHz, with 512MB of RAM, with DigSig using GnuPG's math library, we obtained the results displayed in Table 2. They clearly show that the impact of DigSig is quite important for short executions (such as ls) but soon becomes completely negligible for longer executions such as compiling a project with gcc, or untarring sources with tar.
Second, we measured the exact overhead introduced by our kernel module. To do so, we compared jiffies at the beginning and at the end of bprm_check_security. In brief, jiffies count the clock ticks since the system booted, so they are a precise way to measure time in the Linux kernel; in our configuration, one jiffy corresponds to one millisecond. We ran each binary 30 times (see Table 3) for DigSig compiled with GnuPG.
The results show that, naturally, the digital signature verification overhead increases with the executable's size (which is not a surprise because it takes longer to hash all data).
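This size dependence is easy to reproduce in user space. The quick sketch below (Python's hashlib rather than the kernel CryptoAPI; the buffer sizes are chosen arbitrarily) times SHA-1 over growing buffers:

```python
import hashlib
import time

def time_sha1(n_bytes: int) -> float:
    """Return the wall-clock time taken to SHA-1 a zero-filled buffer."""
    data = b"\x00" * n_bytes
    start = time.perf_counter()
    hashlib.sha1(data).digest()
    return time.perf_counter() - start

# Hash 1 MB, 8 MB, and 64 MB buffers; the cost grows roughly linearly
# with the amount of data hashed, matching the trend in the benchmark.
timings = {n: time_sha1(n * 2**20) for n in (1, 8, 64)}
```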
Finally, to assist us in optimizing our code, we ran Oprofile, a system profiler for Linux, over DigSig (see Table 4). The results clearly indicate that the modular exponentiation routines are the most expensive, so this is where we should concentrate our optimization efforts for future releases. In particular, we plan to port the assembly versions of the math library routines to the kernel, instead of using pure C code.
Conclusion and Future Work
We've shown how DigSig can help you in mitigating the risk of running malicious code. Our future work will focus on two main areas: performance and features.
Obviously, as signature verification overhead impacts all binaries, it is important to optimize it. There are several paths we might follow such as caching signature verification, sporadically verifying signatures, or optimizing math libraries.
From a feature point of view, we recently implemented digital signature verification of shared libraries: if malicious code is inserted into a library, all executables (even signed ones) that link to this library are compromised, which is a severe limitation. This implementation is currently in the testing phase and will be released soon.