
Create Linux User Login Monitor on Monitis

Monitis provides the ability to monitor almost any operation on your server

Monitis provides the ability to monitor almost any operation on your server.  Using simple Linux tools and scripts, you can record each time a user logs into the server and capture information such as the username, host address, and login service.  Using pam_script and bash scripts, you can then transmit this information to a Custom Monitor.

API Access

The first thing you will need in order to create this monitor is your Monitis API Key and Secret Key.  The API Key is an alphanumeric code that allows you to access the Monitis API URLs and transmit or receive data about your Monitis services.  The Secret Key is an alphanumeric code that allows you to digitally sign your information to ensure that only you can transmit data to your Monitis account.  Your API Key may be disclosed to anyone, but your Secret Key must be kept private and should not be shared or transmitted.  To obtain your Monitis API Key and Secret Key, log into your account and, from the top menu bar, go to Tools, then API, then API Key; the page will display both your API Key and your Secret Key.

Now let’s test your API access.  You should be able to connect and get an Auth Token:

curl '[API URL]?action=authToken&apikey=[API Key]&secretkey=[Secret Key]&version=2'

In the above command, replace [API URL] with the address of the Monitis API and [API Key] and [Secret Key] with your own API Key and Secret Key.  We are using curl to connect to the API and request an Auth Token.  The return value is JSON.
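The exact layout may differ, but the response contains an authToken field, for example (the token value here is a made-up placeholder):

{ "authToken": "G7R2M9Q4T1P8S5K3J6L0" }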


Where the alphanumeric code is your Auth Token.  You can use the Auth Token to authenticate against the API later.  However, sending your Secret Key as a URL parameter is not very secure; others could potentially obtain it this way.  The more secure method of authenticating is to send your data using POST instead of GET and to sign the POST data with a Base64-encoded, RFC 2104-compliant HMAC signature.  The signature is sent in the checksum parameter of the POST data.  To calculate the checksum, follow these rules:
  1. sort all parameters alphabetically by name (excluding the checksum parameter)
  2. concatenate all parameter names and values like this: name1value1name2value2…
  3. create a Base64-encoded RFC 2104-compliant HMAC signature using your Secret Key

The final step can be performed using openssl:

echo -en "name1value1name2value2" | openssl dgst -sha1 -hmac [Secret Key] -binary | openssl enc -base64
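For example, to sign two hypothetical parameters, action=test and apikey=ABC123, the concatenated string to sign is actiontestapikeyABC123, and the checksum is computed like this (the parameter values are placeholders for illustration):

echo -en "actiontestapikeyABC123" | openssl dgst -sha1 -hmac "[Secret Key]" -binary | openssl enc -base64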

Creating a Custom Monitor

In order to create a custom monitor, you must send a POST request to the API.  This POST request must contain several parameters: action, name, resultParams, and tag (refer to the Monitis API documentation for the full specifications).  We will use the following values for the parameters:

  • action=addMonitor
  • name=Login Monitor
  • resultParams=user_login:Login Name:logins:3;host:Host Address:hostaddress:3;srv:Service:service:3
  • tag=loginMonitor

Some additional information is also required in order to communicate with the API:

  • apikey=[API Key]
  • timestamp=[Current UTC time]
  • version=2

In order to create our new monitor, called Login Monitor, we post this data plus a checksum to the Custom Monitor API URL.  Here is a simple script that will accomplish this:

#!/bin/bash
# create a Custom Monitor for Monitis
# Be sure to modify the API URL, API Key and Secret Key below
URL="[Custom Monitor API URL]"
APIKEY="[API Key]"
SECRETKEY="[Secret Key]"
ACTION="addMonitor"
NAME="Login Monitor"
RESULTPARAMS="user_login:Login Name:logins:3;host:Host Address:hostaddress:3;srv:Service:service:3"
TAG="loginMonitor"
TIMESTAMP=`date -u +"%F %T"`
VERSION="2"

# Create checksum: concatenate the parameters in alphabetical order and sign with the Secret Key
CHECKSUM_STR="action"$ACTION"apikey"$APIKEY"name"$NAME"resultParams"$RESULTPARAMS"tag"$TAG"timestamp"$TIMESTAMP"version"$VERSION
CHECKSUM=$(echo -en "$CHECKSUM_STR" | openssl dgst -sha1 -hmac "$SECRETKEY" -binary | openssl enc -base64)

# Post data to the API
POSTDATA="--data-urlencode \"action=$ACTION\" --data-urlencode \"apikey=$APIKEY\" --data-urlencode \"name=$NAME\" --data-urlencode \"resultParams=$RESULTPARAMS\" --data-urlencode \"tag=$TAG\" --data-urlencode \"timestamp=$TIMESTAMP\" --data-urlencode \"version=$VERSION\" --data-urlencode \"checksum=$CHECKSUM\""

eval "curl ${POSTDATA} $URL"

Save the above script into a file; the examples below assume it is called create_monitor.sh, but any name will do.  Be sure not to change the order of the variables in the checksum calculation, as they must be in alphabetical order.  Make the file executable:

chmod 755 create_monitor.sh

Now run it:

./create_monitor.sh

The output should look similar to this:
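The exact JSON layout may differ, but the response reports success along with the id of the new monitor, roughly:

{ "status": "ok", "data": 305 }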


This shows us that the monitor was successfully created and that the id of the resulting monitor is 305.  If you go to your Monitis account now, you will be able to access this monitor.  From the top-level menu, go to Monitors, then Manage Monitors, then Custom Monitors.  Here you should find the Login Monitor.  Click the check box next to the title and then click Add to Window.  A window will pop up below the Custom Monitors dialog box.  Close the Custom Monitors dialog box and you will see your new monitor there.  But no data has been sent to it yet, so it is not very interesting.

Sending Data to the Custom Monitor

In order to send data to your Custom Monitor, you must provide the action, monitorId, checktime, and results parameters (refer to the Monitis API documentation for the full specifications).  The action is addResult, the monitorId is the id that was returned to us in the previous example (if you forgot the id, don't worry, we will get it back), the checktime is the timestamp of the results data, and the results is a string of parameter names and values in this format: name1:value1;name2:value2
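If you ever want to look the monitorId up by hand, you can query the API for monitors carrying the tag we assigned earlier; this is the same request the script below uses (the bracketed URL and key are placeholders):

curl "[Custom Monitor API URL]?apikey=[API Key]&output=xml&version=2&action=getMonitors&tag=loginMonitor"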

The following script will send data to your Custom Monitor:

#!/bin/bash
# add result to Custom Monitor for Monitis
# Modify the API URL below; the API Key and Secret Key can be set here or passed on the command line
URL="[Custom Monitor API URL]"
APIKEY="[API Key]"
SECRETKEY="[Secret Key]"
ACTION="addResult"
OUTPUT="xml"
VERSION="2"

usage()
{
cat << EOF
usage: $0 options

This script will add results to a Custom Monitis Monitor.

-h Show this message
-a api key
-s secret key
-m monitor tag
-i monitor id
-t timestamp (defaults to utc now)
-r results name:value[;name2:value2...]
EOF
}

CHECKTIME=`date -u +"%s"000`
TIMESTAMP=`date -u +"%F %T"`

while getopts "ha:s:m:i:t:r:" OPTION
do
  case $OPTION in
    h) usage; exit 1 ;;
    a) APIKEY=$OPTARG ;;
    s) SECRETKEY=$OPTARG ;;
    m) MONITOR=$OPTARG ;;
    i) ID=$OPTARG ;;
    t) CHECKTIME=$OPTARG ;;
    r) RESULTS=$OPTARG ;;
    ?) usage; exit 1 ;;
  esac
done

# The keys, results, and checktime are required, plus either a tag (-m) or a monitor id (-i)
if [[ -z $APIKEY ]] || [[ -z $SECRETKEY ]] || [[ -z $MONITOR$ID ]] || [[ -z $RESULTS ]] || [[ -z $CHECKTIME ]]
then
  usage
  exit 1
fi

# Get id of monitor if not provided: look it up by tag, then strip the XML tags
if [[ -z $ID ]]
then
  XMLID=$(curl -s "$URL?apikey=$APIKEY&output=$OUTPUT&version=$VERSION&action=getMonitors&tag=$MONITOR" | xpath -q -e /monitors/monitor/id)
  ID=$(echo $XMLID | sed -e 's/<[^>]*>//g')
fi

# Add monitor result
# Create checksum: concatenate the parameters in alphabetical order and sign with the Secret Key
CHECKSUM_STR="action"$ACTION"apikey"$APIKEY"checktime"$CHECKTIME"monitorId"$ID"results"$RESULTS"timestamp"$TIMESTAMP"version"$VERSION
CHECKSUM=$(echo -en "$CHECKSUM_STR" | openssl dgst -sha1 -hmac "$SECRETKEY" -binary | openssl enc -base64)

# Post data to the API
POSTDATA="--data-urlencode \"action=$ACTION\" --data-urlencode \"apikey=$APIKEY\" --data-urlencode \"checktime=$CHECKTIME\" --data-urlencode \"monitorId=$ID\" --data-urlencode \"results=$RESULTS\" --data-urlencode \"timestamp=$TIMESTAMP\" --data-urlencode \"version=$VERSION\" --data-urlencode \"checksum=$CHECKSUM\""

eval "curl ${POSTDATA} $URL"

Save this file and make it executable; the examples below assume it is called add_result.sh.  You can run it with no parameters to get a help message, which should be self-explanatory.  You can either provide the API Key and Secret Key on the command line or edit the script to contain them.  The script will look up the monitorId for you if you forget yours, but you will have to know the tag name you gave your Custom Monitor when you created it.  Therefore, either your tag or your monitorId is required to run this script.
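For example, a manual test run might look like this (the file name follows the assumption above; the user, host, and service values are made up):

./add_result.sh -a "[API Key]" -s "[Secret Key]" -m loginMonitor -r "user_login:jdoe;host:192.0.2.10;srv:sshd"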

Capturing Information on Login

Now that we have a script to send data to the Custom Monitor, we need data to send.  This script could easily be run from .bashrc or /etc/bashrc, and that would work fine if we knew that no user would ever delete their .bashrc.  Since we cannot guarantee that, we will use PAM (Pluggable Authentication Modules) to control how and when we send information to the Custom Monitor.  Since no user without root access can alter PAM, this is a secure way to guarantee login information.  Also, since sshd, sftp, ftp, and most other programs utilize PAM for authentication, this will monitor all logins to the server, not just shell logins.

PAM offers many options and modules; we will be utilizing a module called pam_script.  pam_script allows you to execute a script on session open, session close, and/or on auth.  You must download and install pam_script first:

wget '[pam_script download URL]' -O libpam-script.tar.gz
tar -xzvf libpam-script.tar.gz
cd libpam-script-x.x.x #x.x.x is the version that you just downloaded, apparent from the tar output
sudo cp pam_script.so /lib/security/
sudo chown root:root /lib/security/pam_script.so
sudo chmod 755 /lib/security/pam_script.so

pam_script is now installed, but not configured.  There are three files associated with pam_script: /etc/security/onsessionopen, /etc/security/onsessionclose, and /etc/security/onauth.  The first two run when a session is opened or closed, and the last runs on a successful auth.  Since we want to monitor successful auths, we will create the onauth file:

#!/bin/bash
# onauth for Monitis Custom Login Monitor
# pam_script exports the PAM items as environment variables
USER=$PAM_USER; HOST=$PAM_RHOST; SERVICE=$PAM_SERVICE

/etc/security/add_result.sh -m loginMonitor -r "user_login:$USER;host:$HOST;srv:$SERVICE"

This setup requires that you move the add_result.sh script to /etc/security and make both it and the onauth file owned by root and executable only by root:

sudo mv add_result.sh /etc/security
sudo chmod 700 /etc/security/add_result.sh
sudo chown root:root /etc/security/add_result.sh
sudo chmod 700 /etc/security/onauth
sudo chown root:root /etc/security/onauth

Now we need to set PAM to utilize the pam_script module.  Exactly where this is configured varies by system, but you typically need to edit the /etc/pam.d/common-auth file or its equivalent.  Add the following line:

# require the scripts to run at auth
auth required pam_script.so runas=root expose=rhost

Here we are telling the pam_script module to run as root and to expose the rhost variable, which contains the remote host information that we use in the onauth script above via the $PAM_RHOST variable.

Testing the Monitor

Now we have a setup that will log the username, remote host, and login service for every login to our Custom Monitor.  Give it a try: ssh to your machine several times and you will see the values appear in your account's Custom Monitor.
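You can also push a test result without logging in by setting the variables that pam_script would export and running the onauth script directly (the user, host, and service values here are just examples):

sudo env PAM_USER=testuser PAM_RHOST=192.0.2.10 PAM_SERVICE=sshd /etc/security/onauth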



More Stories By Hovhannes Avoyan

Hovhannes Avoyan is the CEO of PicsArt, Inc.
