Create Linux User Login Monitor on Monitis

Monitis provides the ability to monitor almost any operation on your server.  Using simple Linux tools and scripts, you can record each time a user logs into the server and capture various information, including the username, host address, and login service.  Using pam_script and bash scripts, you can transmit this information to a Custom Monitor.

API Access

The first thing you will need in order to create this monitor is your Monitis API Key and Secret Key.  The API Key is an alphanumeric code that allows you to access the Monitis API URLs and transmit or receive data about your Monitis services.  The Secret Key is an alphanumeric code used to digitally sign your information to ensure that only you can transmit data to your Monitis account.  Your API Key may be disclosed to anyone, but your Secret Key must be kept private and should not be shared or transmitted.  To obtain your keys, log into your account and, from the top menu bar, go to Tools, then API, then API Key; the page will display both your API Key and your Secret Key.

Now let’s test your API access.  You should be able to connect and get an Auth Token:

curl 'http://www.monitis.com/api?action=authToken&apikey=[API Key]&secretkey=[Secret Key]&version=2'

In the above command, replace [API Key] and [Secret Key] with your own keys.  We are using curl to connect to http://www.monitis.com and call the API to get an Auth Token.  The return value is JSON and looks similar to this:

{"authToken":"[alphanumeric code]"}

Where the alphanumeric code will be your Auth Token.  You can use your Auth Token to authenticate against the API later.  However, sending your Secret Key in a GET request is not very secure; others could potentially obtain it this way.  The more secure method of authenticating is to send your data using POST instead of GET and to sign the POST data with a Base64-encoded, RFC 2104-compliant HMAC signature.  The signature is sent in the checksum parameter of the POST data.  To calculate the checksum you must follow these rules:
  1. sort all parameters alphabetically by name (excluding the checksum parameter)
  2. concatenate all parameter names and values like this: name1value1name2value2…
  3. create a Base64-encoded RFC 2104-compliant HMAC signature using your Secret Key

The signature in the final rule can be generated using openssl:

echo -en "name1value1name2value2" | openssl dgst -sha1 -hmac [Secret Key] -binary | openssl enc -base64
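
To make these rules concrete, here is a minimal bash sketch that signs an arbitrary set of parameters.  The helper name monitis_checksum is ours for illustration, not part of any Monitis tooling:

#!/bin/bash
# minimal sketch: sign a set of name=value parameters per the rules above
monitis_checksum() {
  local secret="$1"; shift
  local concat="" pair
  # rules 1 and 2: sort the name=value pairs alphabetically, then
  # concatenate them as name1value1name2value2...
  while IFS= read -r pair; do
    concat+="${pair%%=*}${pair#*=}"
  done < <(printf '%s\n' "$@" | sort)
  # rule 3: Base64-encoded RFC 2104-compliant HMAC-SHA1 signature
  echo -en "$concat" | openssl dgst -sha1 -hmac "$secret" -binary | openssl enc -base64
}

# example call with placeholder keys
monitis_checksum "[Secret Key]" "action=authToken" "apikey=[API Key]" "version=2"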

Creating a Custom Monitor

In order to create a custom monitor, you must send a POST request to the API.  This POST request must contain several parameters: action, name, resultParams, and tag (refer to http://monitis.com/api/api.html#addCustomMonitor for specifications).  We will use the following values for the params:

  • action=addMonitor
  • name=Login Monitor
  • resultParams=user_login:Login Name:logins:3;host:Host Address:hostaddress:3;srv:Service:service:3
  • tag=loginMonitor

A few other parameters are necessary in order to communicate with the API:

  • apikey=[API Key]
  • timestamp=[Current UTC time]
  • version=2

In order to create our new monitor, called Login Monitor, we would POST this data plus a checksum to http://monitis.com/customMonitorApi, the Custom Monitor API URL.  Here is a simple script that will accomplish this:

#!/bin/bash
# create a Custom Monitor for Monitis
# Be sure to modify the API Key and Secret Key
URL="http://monitis.com/customMonitorApi"
ACTION="addMonitor"
APIKEY="[API Key]"
SECRETKEY="[Secret Key]"
NAME="Login Monitor"
RESULTPARAMS="user_login:Login Name:logins:3;host:Host Address:hostaddress:3;srv:Service:service:3"
TAG="loginMonitor"
TIMESTAMP=$(date -u +"%F %T")
VERSION="2"

# Create Checksum (parameter names must remain in alphabetical order)
CHECKSUM_STR="action"$ACTION"apikey"$APIKEY"name"$NAME"resultParams"$RESULTPARAMS"tag"$TAG"timestamp"$TIMESTAMP"version"$VERSION
CHECKSUM=$(echo -en "$CHECKSUM_STR" | openssl dgst -sha1 -hmac "$SECRETKEY" -binary | openssl enc -base64)

# Post Data to API
POSTDATA="--data-urlencode \"action=$ACTION\" --data-urlencode \"apikey=$APIKEY\" --data-urlencode \"name=$NAME\" --data-urlencode \"resultParams=$RESULTPARAMS\" --data-urlencode \"tag=$TAG\" --data-urlencode \"timestamp=$TIMESTAMP\" --data-urlencode \"version=$VERSION\" --data-urlencode \"checksum=$CHECKSUM\""

eval "curl ${POSTDATA} $URL"

Save the above script into a file called monitis_create_monitor.sh.  Be sure not to change the order of the variables in the checksum calculation, as they must remain in alphabetical order.  Then make the file executable:

chmod 755 monitis_create_monitor.sh

Now run it:

./monitis_create_monitor.sh

The output should look similar to this:

{"status":"ok","data":305}

This shows that the monitor was successfully created and that the id of the new monitor is 305.  If you go to your Monitis account now, you will be able to access this monitor.  From the top-level menu, go to Monitors, then Manage Monitors, then Custom Monitors.  Here you should find the Login Monitor.  Click the check box next to the title and then click Add to Window.  A window will pop up below the Custom Monitors dialog box.  Close the Custom Monitors dialog box and you will see your new monitor.  But no data has been sent to it yet, so it is not very interesting.

Sending Data to Custom Monitor

In order to send data to your Custom Monitor, you must provide the action, monitorId, checktime, and results parameters (refer to http://monitis.com/api/api.html#addCustomMonitorResult for specifications).  The action is addResult, the monitorId is the id that was returned in the previous example (if you have forgotten the id, don't worry, the script below can look it up), the checktime is the timestamp of the results data, and the results parameter is a string of names and values in this format: name1:value1;name2:value2
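
For example, a results string for our Login Monitor might look like this (the user, host, and service values here are made up for illustration):

user_login:jsmith;host:192.0.2.10;srv:sshd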

The following script will send data to your Custom Monitor:

#!/bin/bash
# add result to Custom Monitor for Monitis
# Provide the API Key and Secret Key with -a and -s, or fill them in below

usage() {
cat << EOF
usage: $0 options

This script will add results to a Custom Monitis Monitor.

-h Show this message
-a api key
-s secret key
-m monitor tag
-i monitor id
-t checktime (defaults to utc now, in milliseconds)
-r results name:value[;name2:value2...]
EOF
}

URL="http://monitis.com/customMonitorApi"
ACTION="addResult"
VERSION="2"
OUTPUT="xml"
APIKEY=""
SECRETKEY=""
MONITOR=""
ID=""
RESULTS=""
CHECKTIME=$(date -u +"%s"000)
TIMESTAMP=$(date -u +"%F %T")

while getopts "ha:s:m:i:t:r:" OPTION; do
  case $OPTION in
    h) usage; exit 1 ;;
    a) APIKEY=$OPTARG ;;
    s) SECRETKEY=$OPTARG ;;
    m) MONITOR=$OPTARG ;;
    i) ID=$OPTARG ;;
    t) CHECKTIME=$OPTARG ;;
    r) RESULTS=$OPTARG ;;
    ?) usage; exit 1 ;;
  esac
done

# either the tag (-m) or the id (-i) must be supplied
if [[ -z $APIKEY ]] || [[ -z $SECRETKEY ]] || [[ -z $MONITOR$ID ]] || [[ -z $RESULTS ]] || [[ -z $CHECKTIME ]]; then
  usage
  exit 1
fi

# Get id of monitor if not provided, by looking it up by tag
if [[ -z $ID ]]; then
  XMLID=$(curl -s "$URL?apikey=$APIKEY&output=$OUTPUT&version=$VERSION&action=getMonitors&tag=$MONITOR" | xpath -q -e /monitors/monitor/id)
  ID=$(echo "$XMLID" | sed -e 's/<[^>]*>//g')   # strip the <id></id> tags
fi

# Add monitor result
# Create Checksum (parameter names must remain in alphabetical order)
CHECKSUM_STR="action"$ACTION"apikey"$APIKEY"checktime"$CHECKTIME"monitorId"$ID"results"$RESULTS"timestamp"$TIMESTAMP"version"$VERSION
CHECKSUM=$(echo -en "$CHECKSUM_STR" | openssl dgst -sha1 -hmac "$SECRETKEY" -binary | openssl enc -base64)

# Post Data to API
POSTDATA="--data-urlencode \"action=$ACTION\" --data-urlencode \"apikey=$APIKEY\" --data-urlencode \"checktime=$CHECKTIME\" --data-urlencode \"monitorId=$ID\" --data-urlencode \"results=$RESULTS\" --data-urlencode \"timestamp=$TIMESTAMP\" --data-urlencode \"version=$VERSION\" --data-urlencode \"checksum=$CHECKSUM\""

eval "curl ${POSTDATA} $URL"

Save this file as monitis_add_data.sh and make it executable.  Running it with no parameters prints a help message, which should be self-explanatory.  You can either provide the API Key and Secret Key on the command line or edit the script to contain them.  The script will look up the monitorId for you if you have forgotten yours, but you will have to know the tag name you gave your Custom Monitor when you created it; either the tag or the monitorId is required to run this script.
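
For example, a test run against the monitor by tag might look like this (the keys and result values are placeholders):

./monitis_add_data.sh -a [API Key] -s [Secret Key] -m loginMonitor -r "user_login:jsmith;host:192.0.2.10;srv:sshd"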

Capturing Information on Login

Now that we have a script to send data to the Custom Monitor, we need data to send.  The script could easily be run from .bashrc or /etc/bashrc, and that would work fine if we knew that no user would ever delete their .bashrc.  Since we cannot guarantee that, we will use PAM (Pluggable Authentication Modules) to control how and when we send information to the Custom Monitor.  Since no user without root access can alter PAM, this is a reliable way to capture login information.  And since sshd, sftp, ftp, and most other services use PAM for authentication, this will monitor all logins to the server, not just shell logins.

PAM offers many options and modules; we will be using a module called pam_script, which allows you to execute a script on session open, session close, and/or on auth.  You must download and install pam_script first:

wget 'http://freshmeat.net/urls/47ddad89e38001dbe0dc50424e36987b' -O libpam-script.tar.gz
tar -xzvf libpam-script.tar.gz
cd libpam-script-x.x.x #x.x.x is the version you just downloaded, apparent from the tar output
sudo cp pam_script.so /lib/security/
sudo chown root:root /lib/security/pam_script.so
sudo chmod 755 /lib/security/pam_script.so
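
You can quickly confirm the module is in place (using the path from the copy step above):

ls -l /lib/security/pam_script.so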

pam_script is now installed, but not configured.  There are three files associated with pam_script: /etc/security/onsessionopen, /etc/security/onsessionclose, and /etc/security/onauth.  The first two run when a session opens or closes, and the last runs on a successful auth.  Since we want to monitor successful auths, we will create the onauth file:

#!/bin/bash
# onauth for Monitis Custom Login Monitor
# PAM_USER, PAM_SERVICE, and PAM_RHOST are placed in the environment by pam_script
# (the API Key and Secret Key are assumed to be filled in inside monitis_add_data.sh)

/etc/security/monitis_add_data.sh -m loginMonitor -r "user_login:$PAM_USER;host:$PAM_RHOST;srv:$PAM_SERVICE"

The onauth script requires that you move the monitis_add_data.sh script to /etc/security, and both files must be executable by root and owned by root:

sudo mv monitis_add_data.sh /etc/security
sudo chmod 700 /etc/security/monitis_add_data.sh
sudo chown root:root /etc/security/monitis_add_data.sh
sudo chmod 700 /etc/security/onauth
sudo chown root:root /etc/security/onauth

Now we need to tell PAM to use the pam_script module.  The exact file varies from system to system, but you need to edit /etc/pam.d/common-auth or its equivalent.  Add the following line:

# require the scripts to run at auth
auth required   pam_script.so  runas=root expose=rhost

Here we are telling the module to run as root and to expose the rhost variable, which contains the remote host information that we use in the onauth script via the $PAM_RHOST variable.

Testing the Monitor

Now we have a setup that will log the username, remote host, and service of every login to our Custom Monitor.  Give it a try: ssh to your machine several times.  You will see the values appear in your account's Custom Monitor.
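
A quick way to generate a few test logins, assuming you can ssh to localhost with a password (public key logins may skip PAM's auth stack):

for i in 1 2 3; do ssh localhost true; done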



More Stories By Hovhannes Avoyan

Hovhannes Avoyan is the CEO of PicsArt, Inc.
