The Challenges of the Linux Audit

Steps for a secure system

As a decision maker in your IT organization, you're aware that Linux's share of your systems is growing (if your enterprise follows today's business trends). Linux installations are now available on every major hardware platform. New development projects include Linux systems in increasing numbers, and you're challenged with incorporating these Linux systems seamlessly into your operations and business processing.

These Linux systems must also now be included as part of your IT audit. IT audits are increasingly performed by cross-functional teams rather than by operations, networks, applications, or database management teams. The cross-functional audit teams have the scope and purview to examine each area of operations. Since your skilled operations teams aren't responsible for policing their own house, they can remain focused on their core skill sets.

The audit teams make scheduled passes, with strategic focus on physical security, network security, applications security, systems security, and whatever else is part of your enterprise security plan. The report is digested and parsed by the audit team leader or information security manager, who tactfully disseminates the information to the appropriate team leaders.

The first challenge emerging from this vision of corporate information systems unity is that the operations teams will potentially mistrust, hate, fear, or otherwise loathe the audit teams. This humanistic certainty is based on the perception that someone is trying to find something wrong so that blame can be assigned. Overcoming this challenge, while not a typical strategic audit goal, is important since you want the audit teams to have unfettered access, and you want their work to be supported and adopted by the operations teams. The audit teams' reports must become meaningful input for operations teams, who will review a report and mitigate the threats instead of putting out fires later because important audit information was not heeded.

Using your vision, sensibility, and other executive powers, you've attained respectful buy-in from the teams - you can now move forward to meet other challenges.

The Audit

One problem identified during Linux audits is that too many people know the root password and other elevated-privilege account passwords. These passwords are the electronic keys to the kingdom in Linux, and taking back control of these accounts is a top audit priority. Typically, everyone who has the root password knows why they shouldn't pass it out or overuse it.

There's limited accountability in most native Linux operating systems, including the lack of a cogent audit trail. Native auditability is primarily centered on the syslog and sulog facilities, which cannot describe the root user's interactive actions on the system at the level required by HIPAA, Sarbanes-Oxley, and NISPOM Chapter 8, to mention only a few. For example, Figure 1 shows a sample sulog, revealing a not-very-detailed snapshot of users using su on a system.

While they're better than nothing, the sample log entries don't describe what actions were taken after the su command was issued. (For the uninitiated, the + or - tells you whether the su request was successful.)
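For reference, sulog entries of this kind (found in /var/adm/sulog on many Unix systems; the exact format varies by platform, and these lines are illustrative, not taken from Figure 1) look roughly like:

```
SU 03/12 14:22 + pts/1 jdoe-root
SU 03/12 14:25 - pts/2 intern-root
SU 03/12 15:01 + pts/1 opsadm-oracle
```

Each line records only the date, time, success or failure, terminal, and the from-to account pair; nothing about what was done afterward.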

The syslog example may be roughly equivalent (see Figure 2).

The example in Figure 2 also indicates privilege being elevated, but does not describe (or require) a reason. Additionally, the file(s) produced by the syslog daemon may contain information not germane to your audit, but again, some information is certainly better than nothing. You can significantly improve the auditability in your enterprise by adding third-party software that captures all standard input, output, and errors, including everything the user does with the elevated privilege.
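As an illustrative sketch of separating the germane entries from the rest, an auditor might filter su events out of a syslog-format authorization log. The sample lines and the /tmp path below are hypothetical stand-ins; real logs typically live in a distribution-specific location such as /var/log/auth.log or /var/log/messages, and the exact message format varies.

```shell
# Illustrative only: filter su events from a syslog-format auth log.
# The sample lines below stand in for the real log file.
cat <<'EOF' > /tmp/auth.sample
Mar 12 14:22:01 salmon su: (to root) jdoe on pts/1
Mar 12 14:25:17 salmon sshd[812]: Accepted password for jdoe
Mar 12 14:30:44 salmon su: FAILED SU (to root) intern on pts/2
EOF

# Isolate the privilege-elevation events, leaving other traffic behind
grep ' su: ' /tmp/auth.sample
```

Even filtered, these entries still say nothing about what the elevated session did, which is the gap third-party capture tools fill.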

The example below is from a policy created on a Linux system (salmon.mydomain.com), using a Symark product called PowerBroker (version 3.2.1). It provides a root shell for any user authorized to run the command pbrun GIMMIEROOT. The policy creates an audit file, akin to those available in other third-party products, to give you more auditability when users gain or use elevated privilege. This particular product will log all standard input, output, and errors, as well as a complete report regarding the secured task:

$ pbrun GIMMIEROOT
Enter your reason for accessing this policy:
I need to edit the /etc/passwd file

Figure 3 shows what the resultant logfile includes. Note that the "who, what, when, where, and why" are evident in the log output.

I truncated the log file, but you can see that your audit team has the ability to see it, and to tell the who, what, when, where, and why for any elevated-privilege or vital-asset access. In addition to third-party products, Linux vendors are working hard to provide this functionality. This functionality significantly improves your teams' ability to take back the root and other elevated-privilege accounts by granting elevated privilege only when the user accesses certain commands or assets (within their normal job descriptions, for example). When access is complete, normal privilege resumes, and the user never knows the elevated password.
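One widely available native analogue is sudo, which grants specific commands at elevated privilege and logs each invocation without ever disclosing the root password. A minimal sudoers fragment along these lines (the user, host, and logfile path are hypothetical; always edit with visudo) might read:

```
# /etc/sudoers fragment -- illustrative only; edit with visudo
# jdoe may edit the passwd map on salmon via vipw, as root,
# with every invocation logged; the root password is never disclosed.
jdoe    salmon = (root) /usr/sbin/vipw
Defaults logfile=/var/log/sudo.log
```

Unlike the session-capture products discussed above, stock sudo logs the command invoked rather than the full standard input/output of the session, so the two approaches are complementary rather than interchangeable.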

So you're familiar with elevated-access audit control; is your audit team as well? Basic audit tenets include reading the documentation to determine what to audit, but what documentation do you have that describes who can access what, when, where, and why?

Your systems, applications, and networks team can collaborate to create a document like Table 1.

Your teams may have used any visualization method, but the output is a matrix of your systems (vertical axis), and your user community (horizontal axis). Notice that each login/access method is described, as well as which system each user can access, from which system, by which method. Once users are on the systems, executable commands are listed, as well as any elevated privilege required. With this documentation, your audit team now knows which systems to go to, which accounts to scrutinize, which commands should normally be allowed as the user, and which commands require elevated privilege. This documentation is simple but effective in meeting the requirement to report upward and manage outward.
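An illustrative slice of such a matrix (the hosts, accounts, methods, and commands below are hypothetical, not drawn from Table 1) might look like:

```
User    From        To       Method   Commands           Elevated privilege
jdoe    desktop-12  salmon   ssh      vi, sqlplus        pbrun GIMMIEROOT
opsadm  salmon      tuna     ssh      tar, df, top       none
backup  tuna        salmon   scp      scp only           none
```

However sparse, a table like this gives the audit team a concrete checklist: any login path, account, or elevated command not on the matrix is, by definition, a finding.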

Another important problem that surfaces in a Linux audit is the publication of passwords, which often happens inadvertently via secure applications scripts (Web startup or shutdown, middleware startup or shutdown, database startup or shutdown, etc.).
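A simple sweep can flag such scripts. The sketch below is illustrative only: the directory, file names, and patterns are hypothetical, and the patterns should be tuned for your environment. It builds two sample startup scripts and flags the one embedding a credential:

```shell
# Illustrative only: flag startup scripts that embed credentials.
mkdir -p /tmp/audit-demo

# A database startup script with an embedded user/password@service string
cat <<'EOF' > /tmp/audit-demo/start-db.sh
#!/bin/sh
sqlplus admin/S3cret@proddb @startup.sql
EOF

# A web startup script with no embedded credentials
cat <<'EOF' > /tmp/audit-demo/start-web.sh
#!/bin/sh
exec httpd -k start
EOF

# List files matching common embedded-credential patterns
grep -rlEi 'password=|passwd=|/[A-Za-z0-9]+@' /tmp/audit-demo
```

In a real audit the sweep would run over your actual Web, middleware, and database script trees, and each hit would be remediated by moving the credential into a restricted file or a secrets facility.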

Information synchronization routines (such as NIS or LDAP v2) also place assets at risk, as they pass account, system, and other enterprise information around the LAN or WAN in clear text. (In the case of passwords specifically, the encrypted value is sent, but agile information bandits know the difference between a crypt, bigcrypt, or MD5 hash. When the rest of the information is in clear text, encrypting only the password may provide little safety.)
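One mitigation is to encrypt the directory traffic itself in transit; OpenLDAP, for example, supports SSL/TLS. A minimal client configuration along these lines (the hostname is hypothetical) might be:

```
# /etc/ldap/ldap.conf fragment -- illustrative only
# Require TLS so directory traffic (accounts, attributes, hashes)
# never crosses the wire in clear text.
URI          ldaps://ldap.mydomain.com
TLS_REQCERT  demand
```

With the server configured to refuse unencrypted binds, the "clear text on the wire" exposure described above is closed, though the at-rest risks (world-readable files, stale accounts) remain.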

Once passwords are obtained by a nontrusted source (someone leaves a file containing a password world-readable, for example), valuable assets are at risk on numerous fronts, including easy access to critical files/data. When an asset can be accessed by a user in masquerade, the asset is at risk. The insertion of a Trojan program, the destruction of an application, and the alteration of data are all undesirable options. Whether compromised by the pad of paper in the machine room, the e-mail to the group alias with a defunct (but still receptive) recipient, the generic account password used by consultants nationwide when installing the new software on your enterprise server, or some other method, the untrusted source now has the ability to log in to one or more systems as someone other than themselves. No audit could save you at this point, as activity performed under the guise of a trusted user is now suspect.
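The world-readable-file exposure in particular is easy to sweep for. The sketch below is illustrative (the paths and file names are hypothetical): it flags a world-readable credential file while leaving a properly restricted one alone.

```shell
# Illustrative only: find world-readable files in a tree that holds credentials.
mkdir -p /tmp/perm-demo
echo 'dbpass=S3cret' > /tmp/perm-demo/db.pw
echo 'key material'  > /tmp/perm-demo/private.key

chmod 644 /tmp/perm-demo/db.pw        # world-readable -- should be flagged
chmod 600 /tmp/perm-demo/private.key  # owner-only -- should not be flagged

# GNU find: report regular files readable by "other"
find /tmp/perm-demo -type f -perm -o=r
```

Run against the directories that actually hold application scripts and configuration, a sweep like this turns "someone left a password file world-readable" from a lucky discovery into a routine audit check.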

Fortunately, your systems audit includes regular checks of ownership, permissions, checksums, and other embedded safety mechanisms to keep data and applications in a known good state. Program files, executables, even operating system and patch levels are recorded and compared from audit to audit, and maintained at the most current secure levels. The LDAP directory is scrutinized for the dysfunction that occurs between Human Resources and Information Systems, which causes transferred or even terminated employees to be removed from systems but allowed to remain in the LDAP directory. This step eliminates the ability of a transferred or terminated employee to gain access to assets via an LDAP-credentialed application. You have delegated and empowered effectively, your audit team is passing back the appropriate report to the systems managers, and the integrity of the systems and programs is secure.
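The checksum portion of that audit can be as simple as a recorded baseline compared on each pass. The sketch below is illustrative (the paths are hypothetical; a real audit would baseline binaries and configuration files) and uses sha256sum:

```shell
# Illustrative only: record a checksum baseline, then verify on the next pass.
mkdir -p /tmp/cksum-demo
echo 'original contents' > /tmp/cksum-demo/app.conf

# First audit pass: record the known-good state
sha256sum /tmp/cksum-demo/app.conf > /tmp/cksum-demo/baseline.sha256

# ...time passes; someone alters the file...
echo 'tampered contents' > /tmp/cksum-demo/app.conf

# Next audit pass: compare against the baseline and flag any drift
sha256sum -c /tmp/cksum-demo/baseline.sha256 || echo 'AUDIT: file changed since baseline'
```

The baseline file itself must of course be stored where the audited systems cannot alter it, or the comparison proves nothing.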

Conclusion

As a quick summary, your internal teams periodically perform these audits:
  • Physical security
  • Operating system
  • Network security
  • Others as you require
Each team has a specific focus and reports to you for dissemination and mitigation. A periodic review of your documentation will reveal newly emerging systems, network components, or applications requiring audits, and your appropriate team will incorporate them as needed. The process feeds itself, as each successive audit both addresses issues and reveals an emerging strength of operations as a cohesive unit, with assets protected in concentric rings of recurring audits.

Your charter to your auditors is multifold, as they assess each aspect of today's increasingly complex information systems nervous system. The audits should be periodic, focused on a specific aspect of the larger picture, and as unintrusive as possible. They should yield a systematic and repeatable report, which is then passed back into the system for assessment and mitigation. Your audit teams use a documentation tool to determine who, what, and how to audit your assets, and the result is that the external audit becomes a quality checkpoint rather than an item causing worry, fear, or loathing.

More Stories By Richard Williams

Richard Williams is director of education for Symark Software in Agoura Hills, California. With over 20 years of experience in systems administration, architecture, and design, Richard oversees the development and delivery of Symark's University Training Program, providing customer support to global enterprise customers.

Most Recent Comments
John Legg 05/13/04 08:26:22 PM EDT

An impressive solution to Linux (as well as Unix) audits is at www.mase.com. Many standard policies, as well as customized ones, can be monitored quickly and painlessly.

Mark Post 04/22/04 04:42:51 PM EDT

The author apparently isn't familiar with SSL/TLS support in OpenLDAP. Nothing has to pass in clear text when using that feature.
