Saturday, 28 January 2017

Best options for using SSH within scripts

As system administrators we frequently write scripts that authenticate to more than one server to accomplish a task. We want our scripts to run fast, but when using ssh within scripts we may see very slow execution if passwordless authentication is not set up for some hosts or a server is unresponsive.
For example, if a server is down & we try to connect to it within our script, by default ssh will keep making connection attempts for 60 seconds before giving up & moving on to the next server. Or take the case where you get prompted for a password while logging into a server: the script will hang until the password is entered.

In this article I'd like to share some ssh options that I use within my scripts for quick execution.

StrictHostKeyChecking:
With this option set to no, host key checking for the destination is turned off, so we don't get prompted to accept the key fingerprint while connecting.

BatchMode:
With this option set to yes, ssh will effectively skip any hosts for which passwordless ssh connectivity is not set up. You can always redirect the hostnames you were unable to connect to into another file.

ConnectTimeout:
This option sets the time in seconds ssh will wait while trying to establish a connection to a host. By reducing this value to under 10 seconds, we cut the waiting time before moving on to the next host. You can redirect the names of hosts you were unable to connect to into another file to check them later.

-q (quiet):
By specifying quiet mode we can suppress the motd & banner messages seen on the terminal when we log in, making for a cleaner output.

Usage in scripts:

We can specify the above mentioned options with our ssh command. But I've realized with experience that a better way is to store the options in a variable, so that if we have multiple ssh commands within the script we don't have to specify the options every time.

Here is an example of how I used these options in a script:


SSH_OPTIONS=" -o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=3 -q"


ssh $SSH_OPTIONS ${name} "bash -s" < /export/home/ssuri/automount_check_solaris.sh


I disabled host key checking, set batch mode to yes, reduced the connection timeout to a mere 3 seconds & finally specified -q for a quiet ssh login.

I assigned all the ssh options I wanted to use to a variable & then used that variable with the ssh command to connect to a list of servers, looping over their names in a for loop & running a local script remotely via "bash -s".
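To tie this together, here is a minimal sketch of that loop. The file names host_list.txt, remote_task.sh & failed_hosts.txt are placeholders I made up for the demo, not paths from my environment:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: loop over a host list with shared ssh options.
# host_list.txt, remote_task.sh & failed_hosts.txt are placeholder names.

SSH_OPTIONS="-o StrictHostKeyChecking=no -o BatchMode=yes -o ConnectTimeout=3 -q"

printf 'nonexistent.invalid\n' > host_list.txt   # sample list for the demo
: > remote_task.sh                               # stand-in for the real script
: > failed_hosts.txt

while read -r name; do
    # ssh's stdin is redirected from the script file, so it cannot
    # swallow the rest of host_list.txt from the loop's stdin.
    if ! ssh $SSH_OPTIONS "$name" "bash -s" < remote_task.sh; then
        # BatchMode & ConnectTimeout make ssh fail fast; record the host.
        echo "$name" >> failed_hosts.txt
    fi
done < host_list.txt
```

With BatchMode=yes, hosts lacking passwordless authentication fail immediately instead of hanging on a password prompt, and end up in failed_hosts.txt for later review.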

Thursday, 26 January 2017

Perl script to check valid email addresses

Confirming whether an email address is valid is a common step in the various website forms we fill out. In this article I'd like to share a quick Perl script which checks whether an entered email address is valid. We'll be using grouping along with regular expressions.

An email address basically consists of an alphanumeric string containing the @ and . characters, which mark the email provider & the top-level domain name. The script I'll share checks for these conditions.

Given below is the script:

[root@alive ~]# cat hello.pl
#!/usr/bin/perl

use warnings;
use strict;

sub main {
        my @add = ("sa789\@gmail.com",
                   "john22.com",
                   "bond24\@yahoo.com",
                   "bond24\@yahoo");

        foreach my $adr (@add) {
                if ($adr =~ /(\w+\@\w+\.\w+)/) { print "$1 \n\n" ; }
                }

        }

main();


When I run the script, the valid email addresses are printed out.

[root@alive ~]# ./hello.pl
sa789@gmail.com

bond24@yahoo.com


Let's describe the condition responsible for filtering the email addresses i.e. (\w+\@\w+\.\w+)
The brackets () indicate a capture group. The \w matches an alphanumeric character (or underscore), and the plus symbol + indicates one or more such characters. The @ symbol has to be escaped with a backslash \ since an unescaped @ in Perl would be interpreted as the start of an array name. In a similar fashion we escape the dot . as well, because an unescaped dot is a metacharacter that matches any character.
Finally, we print the result of the group regular expression match by printing $1.
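One side note: the pattern above will happily match a valid-looking substring inside a longer invalid string. Anchoring it with ^ and $ forces the whole input to match. Here's a quick sketch using perl one-liners (for real-world validation, a dedicated module such as Email::Valid is the safer choice):

```shell
# The anchored pattern ^\w+\@\w+\.\w+$ must match the entire string,
# so "x!y@gmail.com" is rejected even though it contains "y@gmail.com".
for adr in 'sa789@gmail.com' 'bond24@yahoo' 'x!y@gmail.com'; do
    perl -e 'print "$ARGV[0]\n" if $ARGV[0] =~ /^\w+\@\w+\.\w+$/' "$adr"
done
```

Only sa789@gmail.com is printed; the unanchored version would also have accepted the third address.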

I hope this article has been an informative read & gives you ideas for more regular expression use cases.

Tuesday, 24 January 2017

Using AWK for column insertion within a text file

At times we may come across a requirement to insert a column, or even an entire file, into another file. If the requirement is to merge two files such that the columns of the second file commence right after the last column of the first file, we can easily accomplish that with the paste command (or join, if they share a key column). But the task becomes tricky when the files have no common column to join on.
This article gives a quick demo about how we can use awk to add a new column/file to an existing file.

So, we have two files f1 & f2.

[root@alive ~]# cat f1
test1 testA
test2 testB
test3 testC
test4 testD
[root@alive ~]# cat f2
test7 testE
test8 testF
test9 testG
test10 testH


We want to merge file f2 with file f1 such that we would basically be adding two new columns to the file f1.

Here is the awk code to accomplish this task:

 awk '{getline new_col < "f2"} {print $0, new_col}' f1

The resulting output is as follows:

test1 testA test7 testE
test2 testB test8 testF
test3 testC test9 testG
test4 testD test10 testH


In the above example I used $0 with the awk print statement to print all columns of file f1 first followed by those belonging to file f2. We could've easily inserted the columns belonging to file f2 into some individual columns of file f1 by using individual column numbers instead of $0 in the print statement. Here's an example:

 awk '{getline new_col < "f2"} {print $1, new_col, $2}' f1

This awk statement will print the first column of file f1 first followed by the two columns comprised in file f2 and then the second column from file f1.

test1 test7 testE testA
test2 test8 testF testB
test3 test9 testG testC
test4 test10 testH testD
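One caveat worth knowing: when f2 has fewer lines than f1, getline returns 0 at end of file (or -1 on error) but leaves new_col holding its last value, so that value silently repeats on the remaining lines. Here's a sketch that guards against this, using throwaway demo files:

```shell
# Demo files: f1 is deliberately longer than f2.
printf 'test1 testA\ntest2 testB\n' > f1
printf 'test7 testE\n' > f2

# Check getline's return value; substitute a placeholder once f2 runs out.
awk '{ if ((getline new_col < "f2") <= 0) new_col = "-"
       print $0, new_col }' f1
```

The second output line ends in "-" instead of repeating "test7 testE".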

I hope this article was helpful & I'll definitely try to keep posting more tips & tricks like this in the future.

Sunday, 22 January 2017

Introduction to perl LWP

In this article I'd like to demonstrate a brief overview of using the LWP module provided by Perl.
The libwww-perl collection is a set of Perl modules which provides a simple and consistent application programming interface (API) to the World-Wide Web.
The main focus of the library is to provide classes and functions that allow you to write WWW clients.

I've read that the module should be available out of the box in most Perl distributions.
But it wasn't available on my CentOS 7 box, so I installed it via yum.

yum install *perl-LWP* -y

With that done let's get to the code.

The first example prints the HTML source code of a web page.

[root@alive ~]# cat web.pl
#!/usr/bin/perl -w
#
use strict;
use LWP::Simple;

print get("https://www.4shared.com/");


The module used is LWP::Simple. The get function gets the source code of the web page & the print function prints it out to STDOUT.
We get the below output when we run the program:

[root@alive ~]# ./web.pl | more
Wide character in print at ./web.pl line 6.
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>4shared.com - free file sharing and storage</title>
  <meta name="Description" content="Online file sharing and storage - 15 GB free web space. Easy registration. File upload progressor. Multiple file transfer. Fast download.">
  <meta name="Keywords" content="file sharing, free web space, online storage, share files, photo image music mp3 video sharing, dedicated hosting, enterprise sharing, file transfer, file hosting, internet file sharing">
  <meta name="google-site-verification" content="TAHHq_0Z0qBcUDZV7Tcq0Qr_Rozut_akWgbrOLJnuVo"/>
  <meta name="google-site-verification" content="1pukuwcL35yu6lXh5AspbjLpwdedmky96QY43zOq89E" />
  <meta name="google-site-verification" content="maZ1VodhpXzvdfDpx-2KGAD03FyFGkd7b7H9HAiaYOU" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <meta content="IE=edge" http-equiv="X-UA-Compatible">

  <meta property="og:title" content="4shared - free file sharing and storage"/>
<meta property="og:description" content="4shared is a perfect place to store your pictures, documents, videos and files, so you can share them with friends, family, and the world. Claim your free 15GB now!"/>
<link rel="stylesheet" type="text/css" href="https://static.4shared.com/css/common_n.4min.css?ver=2118177915"/>
<link rel="stylesheet" type="text/css" href="https://static.4shared.com/css/ui/elements.4min.css?ver=1246632214"/>
<link rel="stylesheet" type="text/css" href="https://static.4shared.com/auth-popup.4min.css?ver=-2080519390"/>
<link rel="stylesheet" type="text/css" href="https://static.4shared.com/css/themes/account/icons.4min.css?ver=-1551370407"/>
<link rel="stylesheet" type="text/css" href="https://static.4shared.com/css/tipTip.4min.css?ver=-207359769"/>
<script type="text/javascript" src="https://static.4shared.com/js/jquery/jquery-1.9.1.4min.js?ver=-885436651"></script>
<script type="text/javascript" src="https://static.4shared.com/js/jquery/jquery-migrate-1.2.1.4min.js?ver=1171340321"></script>
<script type="text/javascript">
--------------------------------------------
-------------------------------------------- Output truncated for brevity


In the next example we'll download the web page as an html document using the getstore function.
Here's the code:

#!/usr/bin/perl -w
#
use strict;
use LWP::Simple;

#print get("https://www.4shared.com/");

getstore("http://hammersoftware.ca/custom-programming/", "lwptest.html");


This will download the source code of the web page & save it as an HTML document named lwptest.html in the current working directory of the script.

[root@alive ~]# pwd
/root
[root@alive ~]# ls -l web.pl
-rwxr-xr-x. 1 root root 166 Jan 22 03:33 web.pl
[root@alive ~]# ls -l lwptest.html
-rw-r--r--. 1 root root 37789 Jan 22 03:33 lwptest.html
[root@alive ~]# file lwptest.html
lwptest.html: HTML document, UTF-8 Unicode text, with very long lines, with CRLF, LF line terminators
[root@alive ~]#

Along similar lines as the above example, in this final demo we'll download an image from a web site into the current working directory of the script.

#!/usr/bin/perl -w
#
use strict;
use LWP::Simple;

#print get("https://www.4shared.com/");

getstore("http://hammersoftware.ca/wp-content/uploads/2015/03/Perl-logo.jpg", "Logo.jpg");


The execution of the above code results in the download of the image from the given URL, saved under the name Logo.jpg.

[root@alive ~]# pwd;ls -l web.pl ;ls -l Logo.jpg ; file Logo.jpg
/root
-rwxr-xr-x. 1 root root 183 Jan 22 03:43 web.pl
-rw-r--r--. 1 root root 50630 Jan 22 03:43 Logo.jpg
Logo.jpg: JPEG image data, JFIF standard 1.01


Just a quick note here: specifying the name under which to save the web page or image is not optional with the getstore function.
Your code will throw an error if you don't specify a name.

Sunday, 15 January 2017

Using AWK to match columns from multiple files

I came across an interesting requirement on a Facebook forum today. The requirement was to match columns two & three of one file with columns one & two of another file & print the entries from the first file which do not have any matches.

Here are the files:

[root@alive ~]# cat file1
d1,40,gold
d2,30,silver
d3,20,bronze
d4,10,iron
d5,5,wood
d6,20,gold
d7,10,wood
d8,5,gold
d9,10,silver
[root@alive ~]# cat file2
gold,40
silver,30
bronze,20
iron,10
wood,5

The AWK one liner that works is:

awk -F',' 'NR==FNR{c[$1$2]++;next};!c[$3$2]' file2 file1

The following is the output of the above AWK statement:

d6,20,gold
d7,10,wood
d8,5,gold
d9,10,silver


Now, let's do a step by step breakdown of what just happened.

-F ',' 
(The file columns are comma separated. So we changed the field separator)

NR==FNR 
(When you give awk two input files, FNR resets back to 1 on the first line of each new file whereas NR keeps incrementing from where it left off. By checking NR==FNR we are essentially checking whether we are currently parsing the first file.)
 
c[$1$2]++;next
(While parsing the first file, file2, we create an associative array keyed on columns one & two & post-increment its value. The next statement tells AWK to skip any further commands and proceed to the next record.)

!c[$3$2] 
(This pattern only executes when NR==FNR is false, i.e. we are no longer parsing file2 and thus must be parsing file1. We use fields $3 and $2 of file1 as the key to index into our 'seen' array created earlier. If the value returned is 0 it means the pair was never seen in file2 and therefore we should print this line. Conversely, if the value is non-zero then we did see it in file2 and we should not print the line. Note that !c[$3$2] is equivalent to !c[$3$2]{print} because the default action, when none is given, is to print the entire line.)

In case the requirement changes and we needed to print the matching lines instead of those that didn't match, we'd modify our AWK expression as follows:


awk -F',' 'NR==FNR{c[$1$2]++;next};c[$3$2]' file2 file1

And the resulting output will be:

d1,40,gold
d2,30,silver
d3,20,bronze
d4,10,iron
d5,5,wood
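One subtle point about keys like c[$1$2]: plain string concatenation can collide (for example, "gold" + "40" and "gold4" + "0" produce the same key). Using a comma in the subscript joins the fields with awk's SUBSEP character, which avoids that. A self-contained sketch with throwaway demo files:

```shell
# Throwaway demo files mirroring the article's layout.
printf 'd1,40,gold\nd6,20,gold\nd9,10,silver\n' > file1
printf 'gold,40\nsilver,30\n' > file2

# $1,$2 in a subscript is joined with SUBSEP, so concatenated
# field values can never collide with each other.
awk -F',' 'NR==FNR{c[$1,$2]++; next} !c[$3,$2]' file2 file1
```

This prints d6,20,gold and d9,10,silver, matching the logic described above.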

I hope this has been an interesting AWK read.

Friday, 13 January 2017

Script to run a local script remotely only if remote server is reachable

Today I'd like to share a script which will run a local script on a remote server only if the server is reachable via ping. On Linux we can specify the number of pings we want to send to the server & we'll get an exit status almost immediately. On Solaris ping does not take a count option, & if we wait for the default timeout of 20 seconds per ping we'll end up with a very slow script. The workaround is to change the default timeout value; the script in this article uses a ping timeout of 2 seconds. The remote execution of the local script is carried out by using "bash -s" in the ssh session.

Here is the script:

#!/usr/bin/bash

################################################################
#Purpose: run automount check script on Linux/Solaris servers  #
#date: 12/01/2017                                              #
################################################################

host_list=${1}

##clean old output files##
>/export/home/`whoami`/ping_error.txt
>/export/home/`whoami`/automount_hung.txt

##check that server list exists##

if [ $# != 1 ] ; then
        echo "script usage: $0 <server list>"
        exit
fi

for name in `cat $host_list`
do

##check that host is reachable##

ping ${name} 2 &> /dev/null

if [ $? -eq 0 ] ; then
        OS_TYPE=$(ssh -o StrictHostKeyChecking=no -q ${name} 'uname -s')



        if [ "$OS_TYPE" == "Linux" ] ; then

                ##check if automount is hung##
                AUTO_HUNG="ps -eLo pid,pgrp,lwp,comm,wchan|grep autofs4_wait | grep -v automount"
                ssh ${name} ${AUTO_HUNG} >> /dev/null
                        if [ $? -eq 1 ] ; then
                                ssh -o StrictHostKeyChecking=no -q ${name}  "bash -s" < /export/home/ssuri/automount_check.sh
                        else
                                echo "autoFS service is hung on server ${name}"
                                echo "autoFS service is hung on server ${name}" >>  /export/home/`whoami`/automount_hung.txt
                        fi

        else
                ssh -o StrictHostKeyChecking=no -q ${name} "bash -s" < /export/home/ssuri/automount_check_solaris.sh
        fi
else
        echo "could not connect to server ${name}"
        echo "could not connect to server ${name}" >> /export/home/`whoami`/ping_error.txt
fi

echo -e "------------------------------- \n"

done

echo -e "------------------------------- \n\n"
echo "list of unreachable servers /export/home/`whoami`/ping_error.txt"
echo "list of servers where automount was hung is /export/home/`whoami`/automount_hung.txt"


The script being run remotely, automount_check, verifies NFS & automount file systems to make sure they are mounted correctly & in read/write mode. Here is the script:

#!/bin/bash

##############################################
#Purpose: check for automount file systems.  #
#date: 12/01/2017                            #
##############################################

##echo color codes##
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m'


echo -e  "$YELLOW Checking automount file systems on server $(hostname) $NC"

##get automount map file names##

cat /etc/auto.master| grep ^/ | egrep -v '/etc/auto.common|/etc/auto.unix|/etc/auto.ots|/etc/auto.export_home|/etc/auto.misc' | awk '{print $2}' >> /tmp/map_list


for i in `cat /tmp/map_list`; do cat $i | awk '{print $1}'; done >> /tmp/mount_list
cat /etc/fstab | grep nfs | awk '{print $2}' >> /tmp/mount_list
#cat /etc/vfstab | grep nfs | awk '{print $3}' >> /tmp/mount_list

##loop through automount maps and nfs file systems in fstab file ##

for fs_name in `cat /tmp/mount_list`
do

#FS_name=$(cat $map_name | awk '{print $1}')

echo "checking file system: $fs_name"
cd $fs_name

##make sure autoFS mounts as nfs or cifs##

FS_type=$(df -hT . | awk 'NR==3 {print $1}')

if [ "$FS_type" == "nfs" ] || [ "$FS_type" == "cifs" ] ; then
        echo -e $GREEN"$fs_name is of type $FS_type and is mounted correctly$NC"
else
        echo -e "$RED $fs_name is not mounted correctly on $(hostname). Please check $NC"
fi

##check that FS is writable##

sudo touch testfile
if [ $? -eq 0 ] ; then
        echo -e "$GREEN $fs_name mounted on $(hostname) is writable $NC"
else
        echo -e "$RED $fs_name is not writable on $(hostname). Please check $NC"
fi
sudo rm testfile

cd

done

rm /tmp/map_list
rm /tmp/mount_list


I wrote it to run post-checks after a network activity involving NAS shares, just to make sure everything was normal once the activity completed.

Tuesday, 10 January 2017

Common file test operations in Perl

Performing an action based on the existence or contents of a file or directory is fairly common in shell scripting. In this article, we explore some of the tests that we can run against a file/directory using the if conditional statement.

1. Check for existence of file (-e).

The following little program checks if a file named testfile exists & prints the name; otherwise it prints the error message stored in the $! variable.

#!/usr/bin/perl -w
#
use strict ;

my $file_name = "/root/testfile";

if (-e $file_name) {
        print "file name is $file_name \n" ;
}
else {
        print "$file_name $! \n" ;
}

When we execute the script we get the following result:

[root@alive ~]# ./filetest.pl
file name is /root/testfile

Now if I change the file name to testfile1, the output changes.

[root@alive ~]# ./filetest.pl
/root/testfile1 No such file or directory


2. Check if file is writable (-w).

The -w test returns true if the file is writable by the effective user running the script.


3. Check if file is empty (-z):

The file testfile is not empty, so if I run the following script:

#!/usr/bin/perl -w
#
use strict ;

my $file_name = "/root/testfile";

if (-z $file_name) {
        print "file name is $file_name \n" ;
}
else {
        print "$file_name not empty $! \n" ;
}

I'll get this result:

[root@alive ~]# ./filetest.pl
/root/testfile not empty
[root@alive ~]#

We can run negated tests as well. For example, if I wanted the if statement to be true when the file is not empty, I could precede -z with not to indicate a negative match, as shown below:

#!/usr/bin/perl -w
#
use strict ;

my $file_name = "/root/testfile";

if (not -z $file_name) {
        print "file name is $file_name \n" ;
}
else {
        print "$file_name is empty \n" ;
}

The result of this script's execution would be:

[root@alive ~]# ./filetest.pl
file name is /root/testfile


4. Check if the file is a plain file (-f).
This checks whether the file is a regular file, as opposed to a special file such as a directory or a device file.


5. Check that the file exists and is of non-zero size (-s).
This can be regarded as somewhat the opposite of the -z option.


6. Check if the file is a directory (-d).


There are many more file test operations available which can be looked up from perldoc. 
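The tests above can also be exercised quickly from the shell with perl one-liners. A small sketch using a temporary directory (file names are throwaway demo values):

```shell
# Exercise -e, -z, -s & -d against freshly created files.
tmpdir=$(mktemp -d)
echo data > "$tmpdir/full"
: > "$tmpdir/empty"

perl -e 'print((-e $ARGV[0]) ? "exists\n" : "missing\n")' "$tmpdir/full"
perl -e 'print((-z $ARGV[0]) ? "empty\n" : "non-empty\n")' "$tmpdir/full"
perl -e 'print((-s $ARGV[0]) ? "has size\n" : "zero size\n")' "$tmpdir/empty"
perl -e 'print((-d $ARGV[0]) ? "directory\n" : "plain file\n")' "$tmpdir"

rm -r "$tmpdir"
```

This prints exists, non-empty, zero size & directory, one per line.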

Automating telnet prompts in shell scripts


Telnet is a common tool we use to confirm if connectivity on a particular port is working or not.
However, since it's an interactive program it's difficult to use in a script, because you have to press ctrl+] followed by quit every time you exit the telnet prompt.

I found a neat little trick to get around this using the echo command.
If we pipe the word quit to our telnet test then the telnet prompt exits automatically once the test concludes.

[ssuri@:~] $ echo "quit" | telnet localhost 25
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection to localhost closed by foreign host.


This can prove to be very helpful while writing scripts.
For example, we may want a particular action performed if connectivity exists & the script to exit otherwise.
One thing I couldn't get around is that "Connection to localhost closed by foreign host." got printed no matter what text filter I tried.
So I finally worked around it by redirecting the telnet test output to a file & then running an if condition on the file.

Given below is the example:


#!/usr/bin/bash

VAR1=$(echo "quit" | telnet localhost 25 | awk 'NR==2 {print $1}')

echo  $VAR1 > /tmp/sa

VAR2=$(cat /tmp/sa)

if [ "$VAR2" == "Connected" ] ; then
        echo "connectivity exists"
fi

rm /tmp/sa


This script provides the following output when executed.

[ssuri@:~] $ bash def.sh
Connection to localhost closed by foreign host.
connectivity exists
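As an aside, if the script runs under bash, the same connectivity test can be done without telnet at all, using bash's /dev/tcp pseudo-device. This is a sketch that assumes bash is available (/dev/tcp is a bash feature, not a real file), with example host & port values:

```shell
#!/usr/bin/env bash
# Port check via bash's /dev/tcp; host & port are example values.
host=127.0.0.1
port=25

# The subshell around exec means the descriptor is closed again
# as soon as the test finishes; no "closed by foreign host" noise.
if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "connectivity exists"
else
    echo "no connectivity on $host:$port"
fi
```

If the connect fails for any reason the else branch runs, so this also behaves sanely on shells without /dev/tcp support.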

Exploring Here Documents in Perl

I find here documents a very useful & interesting aspect of working with I/O.
A here document essentially allows us to feed static input arguments to an otherwise interactive program.

Here is a common example of using here documents with the cat command.

[ssuri@:~] $ cat > abc << EOF
> this is a
> a
> test file
> EOF
[ssuri@:~] $ cat abc
this is a
a
test file
[ssuri@:~] $

The syntax is the command followed by the << operator & a limit string.
The limit string is used twice: once right after the << operator & again at the end of the input, to indicate that there is no further input beyond that point.


In this article I'd like to demonstrate using here documents in Perl. I'll explore the concept using a simple script which takes the content generated by a here document assigned to a variable, copies it to a file & also prints the content.
Here is the script.

#!/usr/bin/perl -w

use Fcntl;

open (FH1, "+> /tmp/afile") || die ;

my $heredoc = <<'END_MESSAGE' ;
server1
server2
server3
server4
server5
END_MESSAGE


print FH1 $heredoc ;

seek FH1, 0, 0;

my @herearray = <FH1> ;

foreach (@herearray) { print "this is server number $_ \n" ; }


In the above script we've used a here document to populate a variable named $heredoc. We then copy the contents of the variable to a file via the file handle FH1.
We then use the seek function to rewind the file afile via file handle FH1 so it can be re-read, assign its contents to an array & print them.

The script when executed gives the following output.

[root@cent6 ~]# ./here.pl
this is server number server1

this is server number server2

this is server number server3

this is server number server4

this is server number server5


This is more of a proof of concept example.
We can use here documents for more advanced scripting.

Saturday, 7 January 2017

Using awk over a remote ssh connection

While trying to retrieve information from multiple servers in a script we may require formatting of the output via awk or sed. In this article I'll describe how we can use awk over ssh & some best practices while writing scripts involving lists of files.

As an example, I've taken up a requirement I recently had at work where I needed to check a list of servers for finger service & separate the servers on which service was enabled/disabled.

A couple of best practice tips:


  • When using files as input to for loops involving server names, use command line arguments to supply input files instead of hard coding the file name within the script. This enhances re-usability of the code.
  • Now that we are feeding the input via a command line argument, make sure that the user enters it while running the script & exit otherwise.
  • After ensuring that the user supplies the command line argument, put in another check to make sure that the supplied input is a non-empty file & exit otherwise.

So, with tips out of the way, let's see how we'll use awk over a remote ssh connection.
Generally when we run a command over ssh we enclose the command within single quotes (''). But this won't work if we are using awk, since awk uses single quotes as well. Next, if we need to print a particular column or field and we specify the usual $col, that won't work either, because the local shell will interpret it as a variable.

The fixes for the above mentioned problems are as follows:


  • Use double quotes for enclosing the commands to be run via ssh.
  • For specifying the field number with awk, escape the dollar sign with a backslash (\$).
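The effect of the escaped field number can be reproduced locally, with bash -c standing in for the remote shell (a stand-in only, no ssh involved; this assumes the enclosing script has no positional parameters):

```shell
# With the backslash, the inner shell & awk receive a literal $1.
bash -c "echo alpha beta | awk '{print \$1}'"
# prints: alpha

# Without it, the *outer* shell expands $1 (usually empty) before the
# command string is ever sent, so awk ends up printing the whole line.
bash -c "echo alpha beta | awk '{print $1}'"
# prints: alpha beta
```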


Given below is the working script I wrote based on the above mentioned discussion:


#!/usr/bin/bash

##check that server list exists##

if [ $# != 1 ] ; then
        echo "script usage: $0 <server list>"
        exit

elif [ ! -s ${1} ] ; then

        echo "server list file not found. Exiting"
        exit
fi

##
        for name in `cat ${1}`
        do
        
        result=$(ssh -o StrictHostKeyChecking=no -q ${name} "svcs -a | grep finger | awk '{print \$1}' ")
        
        if [ "${result}" == "disabled" ] ; then
                echo ${name} >> /export/home/ssuri/finger_disabled.txt
        elif [ "${result}" == "online" ] ; then 
                echo ${name} >> /export/home/ssuri/finger_enabled.txt
        else 
                echo "could not get status of service on ${name}"
                echo "please log in and check"
        fi      
        done
        
        echo "list of servers on which finger is enabled is /export/home/ssuri/finger_enabled.txt"
        echo "list of servers on which finger is disabled is /export/home/ssuri/finger_disabled.txt"

Friday, 6 January 2017

Perl array modifications with splice, split & join

In this article I'll demonstrate modifying array contents using splice, split & join functions.
I've worked with one dimensional arrays for the sake of simplicity but the concepts would be valid for multidimensional arrays as well.

1.) Splice function:

This is used to remove a defined number of elements from an array beginning with a given offset & replacing the elements with other elements if specified.
The syntax is as follows:

splice @ARRAY, OFFSET [ , LENGTH [ , LIST ] ]

This function will remove the elements of @ARRAY designated by OFFSET and LENGTH and replaces them with LIST, if specified.


2.) Split function:

The split function is used to break a string into an array using a pattern, for example splitting each line of a file into individual fields.
The syntax is as follows:

split /PATTERN/, EXPRESSION [ , LIMIT ]

This function splits a string into an array. If LIMIT is specified, the function splits into at most that number of fields.
If PATTERN is omitted, it splits on whitespace.
I use the split function a lot for splitting files into individual columns.


3.) The join function:

The join function is somewhat the opposite of the split function. Where split breaks a string into an array using a specified delimiter, join combines individual array elements into a string.
This also proves useful if you are trying to change the delimiter used in a file's contents.

The syntax for join function is as follows:

join EXPRESSION, LIST

This function joins the separate strings of LIST into a single string with fields separated by the value of EXPRESSION and returns the string.


Here is a short script demonstrating the use of the above discussed functions:

[root@cent6 ~]# cat af.pl
#!/usr/bin/perl -w

use strict ;

##using splice function##

my @array = qw/linux solaris unix aix hpux/ ;

print "OS names before splicing: @array \n" ;

splice (@array, 0, 2, "debian", "ubuntu") ;

print "OS names after  splicing: @array \n" ;

##using split function##

my $var = "linux:solaris:unix:hp-ux:tru64" ;

my @string = split (':', $var) ;

#print each element on new line: foreach (@string) { print "$_ \n" ; }
print "@string \n" ;

##using join function##

my $var1 = "linux,solaris,unix,hp-ux,tru64" ;

my @str = split /,/, $var1 ;

my $jn = join '|', @str ;
print "$jn \n" ;

[root@cent6 ~]#

On execution the code yields the following output:

[root@cent6 ~]# ./af.pl
OS names before splicing: linux solaris unix aix hpux
OS names after  splicing: debian ubuntu unix aix hpux
linux solaris unix hp-ux tru64
linux|solaris|unix|hp-ux|tru64
[root@cent6 ~]#


There are many more functions associated with arrays available like push, pop, shift, unshift & sort. I might write articles describing the other functions at a later date.

A Ping check script in Perl

The title sounds simple. The script sounds like a mere one-liner: just ping a server?
The task is simple but the way we execute it in the script I wish to share isn't.
I took the ping check as a basic example to demonstrate the concept.

I have a list of servers & I need to separate them: servers which respond to a ping go in one file & servers which do not respond go in another file.
The list of servers used as input is supplied as a command line argument & read in the script via @ARGV.
The script will exit if no file is provided as a command line argument.

Here is the script:

[root@alive ~]# cat ssh.pl
#!/usr/bin/perl
#

my $list = $ARGV[0] ;

if (@ARGV != 1) {
        print "script usage: $0 <server_list> \n";
        exit ;
        }

open (FH1, "$list") || die "error in file $list : $!" ;
open (FH2, ">> host_alive") || die "errors $!" ;
open (FH3, ">> host_dead") || die "errors $!" ;

while (<FH1>) {
        chomp $_ ;
        system ("ping -c 1 $_ > /dev/null 2>&1") ;
        if ($? == 0) {
                printf FH2 "$_ \n"  ;
                }
        else    {
                printf FH3 "$_ \n" ;
                }
        }

print "script complete \n" ;

I've put some host names in a file named server_list.

[root@alive ~]# cat server_list
test
alive
test123
alive

Now, lets test the script by typing: ./ssh.pl server_list
Once the script completes the content within my two output files is as follows:


[root@alive ~]# cat host_alive
alive
alive
[root@alive ~]# cat host_dead
test
test123
[root@alive ~]#


So, the concept I've tried to explain here is how we can take a given list of systems & classify each server based on its response to a command.
Many would argue that the script would be much simpler in bash but I've made an effort to appreciate the beauty in complexity.

Monday, 2 January 2017

A script to display a running counter



We may often come across situations while writing scripts where we need to wait for an event to complete or use the sleep command. It would be useful to display the amount of time remaining before the script resumes execution.

Given below is a short script that would display a running timer for 10 seconds using a simple while loop and the sleep command.

[root@cent6 ~]# cat count.sh
#!/bin/bash

i=10

echo "timer starting for $i seconds"

while [ $i -gt 0 ]
do
        echo -ne "\t $i \033[0K\r"
        sleep 1
        i=$((i - 1))
done

echo "counter complete"


The real magic of the counter comes from the echo statement. Here is a description of what we've used in the echo statement:

-n will not output the trailing newline. So that saves me from going to a new line each time I echo something.

-e will allow me to interpret backslash escape sequences.

\033[0K is an "erase to end of line" sequence which clears the rest of the line if any characters are left from previous output.

\r is a carriage return which moves the cursor to the beginning of the line.

The output of the script is a cool neat timer that runs for 10 seconds.

Introduction Let me start by saying that this article isn't about capture groups in grep per se. What we are going to do here with gr...