Wednesday, 30 November 2016

Run a command via sudo but as a different user


This sounds simple, & it is, as long as you are doing it on the command line & not inside a script.
Let's talk about the scenario first. Suppose I'm a user & my user name is sahil. I have sudo privileges to work as user testuser.

[sahil@centops ~]$ sudo -l
Matching Defaults entries for sahil on this host:
    !visiblepw, always_set_home, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR
    USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME
    LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY",
    secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin

User sahil may run the following commands on this host:
    (root) NOPASSWD: /usr/bin/sudo su - testuser


I need to run a script involving a command that must be executed as testuser. Sounds simple enough. Here's a mundane example:

[sahil@centops ~]$ cat test.sh
#!/bin/bash

echo "Script to test sudo privileges"

/usr/bin/sudo su - testuser
cp /home/testuser/file1 /home/testuser/file2

if [ $? -eq 0 ]
then
        echo "command was successful"
else
        echo "There seems to be a problem"
fi


So, that's a simple script to switch to testuser, copy a file & then confirm whether the file was copied successfully. But when I run it, it doesn't work as I intend. Here's the output of running the script in debug mode with the -x option.

[sahil@centops ~]$ bash -x test.sh
+ echo 'Script to test sudo privileges'
Script to test sudo privileges
+ /usr/bin/sudo su - testuser
[testuser@centops ~]$ exit
logout
+ cp /home/testuser/file1 /home/testuser/file2
cp: accessing `/home/testuser/file2': Permission denied
+ '[' 1 -eq 0 ']'
+ echo 'There seems to be a problem'
There seems to be a problem

What happened is that when the sudo command ran, I switched from the user sahil to testuser & got a new shell. The remaining commands in the script execute only after I exit the new shell I got as testuser. When I do so, the copy operation fails since I'm logged back in as user sahil, who does not have the required privileges. Now that we've understood the problem, let's apply the fix.

To start things off, we need to edit the sudoers entry for the user sahil. It should look like this:

sahil ALL=(testuser:testuser)  ALL

What the above line says is that the user sahil is allowed to run any command, on all hosts, with the privileges of the testuser user & testuser group.

The sudo -l output after this addition will look like this:

[sahil@centops ~]$ sudo -l
Matching Defaults entries for sahil on this host:
    !visiblepw, always_set_home, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR
    USERNAME LANG LC_ADDRESS LC_CTYPE", env_keep+="LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES", env_keep+="LC_MONETARY LC_NAME
    LC_NUMERIC LC_PAPER LC_TELEPHONE", env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY",
    secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin

User sahil may run the following commands on this host:
    (testuser : testuser) ALL
[sahil@centops ~]$


I've modified our script as follows:

[sahil@centops ~]$ cat test.sh
#!/bin/bash

echo "Script to test sudo privileges"

sudo  -u testuser cp /home/testuser/file1 /home/testuser/file2

if [ $? -eq 0 ]
then
        echo "command was successful"
else
        echo "There seems to be a problem"
fi
[sahil@centops ~]$


Now let's execute it.

[sahil@centops ~]$ ./test.sh
Script to test sudo privileges
command was successful
[sahil@centops ~]$
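If several commands must run as testuser, it can be handy to wrap the pattern in a small helper. The sketch below is my own, not from the original post; it assumes the sudoers entry shown above, & the helper name as_testuser is made up.

```shell
#!/bin/bash
# Hypothetical helper: run a command string as testuser in a single,
# non-interactive sudo call (no "su -", so no interactive shell is spawned).
# Assumes "sahil ALL=(testuser:testuser) ALL" is present in sudoers.
as_testuser() {
    sudo -u testuser bash -c "$1"
}

if as_testuser 'cp /home/testuser/file1 /home/testuser/file2'; then
    echo "command was successful"
else
    echo "There seems to be a problem"
fi
```

Because the whole command string travels through one sudo call, $? reflects the copy itself rather than the exit status of an interactive shell.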


Tuesday, 29 November 2016

Making sure running commands keep running


The title of this article may seem a bit misleading, but it is actually about job control & how we can make sure that long-running commands don't die once we close our terminal windows.

So let's start with job control. In Linux, if we want to run a command in the background we place an & symbol after it. For example:

[root@centops ~]# sleep 5000 &
[1] 18250
[root@centops ~]#

As soon as we press enter we are returned to the prompt & the system gives us a job number (in this case 1) & a PID 18250.

[root@centops ~]# pgrep -fl sleep
18250 sleep 5000
[root@centops ~]#

So, what if we want to place an already running job in the background because we forgot to append & when we executed the command? That's easy. We just press ctrl+z to suspend the job & then use bg to start it in the background.

[root@centops ~]# sleep 7000
^Z
[3]+  Stopped                 sleep 7000
[root@centops ~]# bg %3
[3]+ sleep 7000 &
[root@centops ~]#

To view currently running/stopped jobs along with their PID, type jobs -l.

[root@centops ~]# jobs -l
[1]  18250 Running                 sleep 5000 &
[2]- 18297 Running                 sleep 6000 &
[3]+ 18300 Running                 sleep 7000 &
[root@centops ~]#

To bring a job to the foreground, type fg followed by % & the job number. For example:

[root@centops ~]# fg %2
sleep 6000
^Z
[2]+  Stopped                 sleep 6000
[root@centops ~]# bg %2
[2]+ sleep 6000 &
[root@centops ~]#
[root@centops ~]# jobs -l
[1]  18250 Running                 sleep 5000 &
[2]- 18297 Running                 sleep 6000 &
[3]+ 18300 Running                 sleep 7000 &
[root@centops ~]#


In the above jobs -l output, the + symbol marks the current job, i.e. the one most recently managed. To terminate a job, bring it to the foreground & press ctrl+c.

All the jobs we have running here will be sent a hangup signal (SIGHUP) once we close the terminal & will be gone. SIGHUP tells the running jobs that their controlling terminal has been terminated, so the jobs should terminate themselves.
Now let's look at preserving our jobs after the terminal window is closed.

Use nohup:
nohup is a common way of preventing jobs from terminating once we close the terminal window. The syntax for starting a job using nohup in background is nohup <command> &.
For example,

[root@centops ~]# nohup sleep 600 &
[1] 18486
[root@centops ~]# nohup: ignoring input and appending output to `nohup.out'
[root@centops ~]#

The nohup command will send all its output to a file named nohup.out. If the command generates a lot of output then the size of the nohup.out file might be something to look out for.
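Since nohup.out grows for as long as the job runs, a common variation (my sketch, not from the post; mytask.log is a made-up name) is to give nohup an explicit log file:

```shell
# Send the job's output to a named log instead of nohup.out.
nohup sh -c 'echo job started; sleep 1' > mytask.log 2>&1 &
wait $!             # only for this demo; normally you would just close the terminal
cat mytask.log      # contains: job started
```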


Use disown:
nohup is great as long as you remember to type it before the command, since nohup needs to precede the command. What if we forgot to do that & still want to prevent our job from being killed when we close the terminal? In that case we can use disown. disown works in a fashion similar to bg & fg in the sense that it accepts a % followed by a job number as its input, but its functionality is totally different.
disown removes a job from the shell's job table. So if we disown the job, it'll keep running in the background but we forfeit the shell's control over it, i.e. the job will no longer be visible in the jobs -l output & we won't be able to bring it back to the foreground if we need to. Here's a demo.

[root@centops ~]#  sleep 600 &
[1] 18507
[root@centops ~]# jobs -l
[1]+ 18507 Running                 sleep 600 &
[root@centops ~]#
[root@centops ~]# disown %1
[root@centops ~]#
[root@centops ~]# jobs -l
[root@centops ~]#
[root@centops ~]# pgrep -fl sleep
18507 sleep 600
[root@centops ~]#

In the above example I started a sleep command in the background. I then disowned the job, after which it was no longer visible in the jobs -l output, but when I checked the PID I found the command still running. Now if I close my terminal window, the disowned job will continue to run in the background until it completes.


Use screen:
disown is nice, but the problem is that we can't bring a disowned job back to the foreground later from another terminal. screen is an extremely versatile tool & what I'm illustrating here is a small part of the complete tool set that screen provides. So here's how we do it.
Type the screen command to start a screen session. Type the command you need to execute & append & to put it in the background for execution. Now we'll detach the screen: press ctrl+a followed by d.
Now open another terminal session & type screen -dr. This will re-attach the most recently detached screen, i.e. the one we were working on before. To terminate a screen session, just type exit.

Monday, 28 November 2016

Pause an ssh session


This is a short read but most definitely a neat trick. While working on systems as part of the daily routine we frequently traverse systems, logging into one server, then another, & back, & so on & so forth. Time & again we come across a situation wherein we've just logged into a server & remembered that we needed something from the source server, so we go back to the source server & have to follow the complete login process all over again.
There is a neat trick which allows you to temporarily suspend an ssh connection, do your work on the source & log back in when you are done.
Here's the demo.

I'm logged into my source system centops as root user.


Now I log in to another system, cclient1, & I just remembered something that I need to check on the system centops. Instead of logging out & logging back in, I'll suspend my ssh session by entering the following command sequence: ~ followed by ctrl+z.


Just a note here. The ~ character isn't visible on screen when we type it. We are able to view the complete command we typed post execution.
So, our ssh connection has been suspended & we are back to our source server. To make sure that the session still exists in suspended state we can use the jobs command & check the PID in the ps -ef output.


To resume the session, all we have to do is use the fg command followed by % & the job id.


Setting the password non-interactively in Linux using bash shell


Manually resetting the passwords of a large chunk of users is a painfully boring task & definitely not the best use of our time. In this article I'll share a few methods to somewhat automate this process. So here we go!

Method 1: use chpasswd
Using chpasswd we can set or reset login passwords for many users non-interactively. We just need to add the username & password as key-value pairs in a text file & serve the resultant file as input to the chpasswd command, & our work is done. Here's a demo:

I've created a user named testuser & I want to set its password to 123. So, I've added the key-value pair in a text file shown below:

[root@centops ~]# cat pass.txt
testuser:123
[root@centops ~]#

Now we just need to feed it to chpasswd.

[root@centops ~]# chpasswd <pass.txt
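For a larger batch, the key-value file can be generated in a loop first. This is a sketch of mine, with made-up user names & a made-up password:

```shell
# Build username:password pairs for several users, then feed the file
# to chpasswd (the chpasswd step itself must be run as root).
for u in testuser1 testuser2 testuser3; do
    printf '%s:%s\n' "$u" 'Temp@123'
done > pass.txt

# chpasswd <pass.txt    # run as root
```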


Method 2: use stdin
This is another simple method wherein we echo out the password to passwd <user name> command via --stdin. Here's an example:

[root@centops ~]# echo "456"  | passwd testuser --stdin
Changing password for user testuser.
passwd: all authentication tokens updated successfully.


Method 3: use expect
expect is an awesome tool for supplying input to interactive programs to automate them. Here's the expect code to accomplish a non-interactive password reset:

#!/usr/bin/expect

set timeout 10

set user [lindex $argv 0]

set password [lindex $argv 1]

spawn passwd $user

expect "password:"
send "$password\r"
expect "password:"
send "$password\r"

expect eof


The test looks like this:

# ./e3.sh testuser 123

spawn passwd testuser
Changing password for user testuser.
New password:
BAD PASSWORD: it is WAY too short
BAD PASSWORD: is too simple
Retype new password:
passwd: all authentication tokens updated successfully.

Bash shell arguments


We frequently use arguments with the commands that we type on the command line. For example, in the command ls -l /tmp, ls is the command, -l is the option & /tmp is the argument. We can use arguments with our bash shell scripts as well to influence the behavior of the code as per our requirements.


$0
The first shell argument we look at is $0. This represents the name of the file/script which is being run. To illustrate I wrote a small script bargs.sh with the following content:

[root@centops ~]# cat bargs.sh
#!/bin/bash

echo "The name of the script is $0"

When I run the script, the name of the script gets substituted as the value of argument $0.

[root@centops ~]# bargs.sh
The name of the script is /root/bargs.sh
[root@centops ~]# ./bargs.sh
The name of the script is ./bargs.sh


Positional parameters
These arguments are the ones we type after the name of the script, separated by spaces. They are $1, $2 & so on. Just a note: if you use more than nine arguments with the script, the 10th & subsequent arguments need to be written as ${10} & so on. If you write $10, the shell will interpret it as the value of $1 with a zero appended while printing.
I've modified the bargs.sh script used earlier to include the arguments $1 & $2.

[root@centops ~]# cat bargs.sh
#!/bin/bash

echo "The name of the script is $0"

echo "The first argument is $1"

echo "The second argument is $2"

I executed the script providing two arguments to it:

[root@centops ~]# ./bargs.sh sahil suri
The name of the script is ./bargs.sh
The first argument is sahil
The second argument is suri
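The ${10} behaviour described earlier is easy to verify with set --, which assigns positional parameters directly. A quick sketch:

```shell
# Ten positional parameters, then $10 versus ${10}.
set -- a b c d e f g h i j
echo "$10"     # the shell expands $1 & appends 0: prints a0
echo "${10}"   # the actual tenth parameter: prints j
```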


$#
The argument $# represents the number of arguments typed with the script during execution. This is useful for applying a condition wherein you want a script to run only when the user enters the required number of arguments. I updated the bargs.sh script as follows:

[root@centops ~]# cat bargs.sh
#!/bin/bash

echo "The name of the script is $0"

echo "The first argument is $1"

echo "The second argument is $2"

echo "you entered $# arguments"

The output from this script is as follows:

[root@centops ~]# ./bargs.sh sahil suri
The name of the script is ./bargs.sh
The first argument is sahil
The second argument is suri
you entered 2 arguments
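To actually enforce the argument count, the usual pattern is to check $# at the top of the script & bail out early. A sketch of mine; argcheck.sh is a throwaway name:

```shell
# Write a script that refuses to run without exactly two arguments.
cat > argcheck.sh <<'EOF'
#!/bin/bash
if [ $# -ne 2 ]; then
    echo "Usage: $0 <first> <second>" >&2
    exit 1
fi
echo "you entered $# arguments"
EOF
chmod +x argcheck.sh

./argcheck.sh sahil suri              # you entered 2 arguments
./argcheck.sh sahil || echo "blocked" # the guard fires with one argument
```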


$* and $@
Both these arguments basically perform the same function: they hold the arguments that were entered with the shell script. But there is a subtle difference in how they are interpreted. I've expanded the script bargs.sh further to illustrate their usage.

[root@centops ~]# cat bargs.sh
#!/bin/bash

echo "The name of the script is $0"

echo "The first argument is $1"

echo "The second argument is $2"

echo "you entered $# arguments"

echo "The arguments entered are $*"

echo "The arguments entered are $@"

The output of the script is as follows:

[root@centops ~]# ./bargs.sh sahil suri
The name of the script is ./bargs.sh
The first argument is sahil
The second argument is suri
you entered 2 arguments
The arguments entered are sahil suri
The arguments entered are sahil suri

The difference between $* & $@ isn't apparent from the above example. The two expansions produce the same result unless they are quoted & we change the value of IFS (the internal field separator), which is whitespace by default. "$*" joins all the arguments into a single string, separated by the first character of IFS, whereas "$@" preserves each argument as an individual word.
With that understood, let's see an example to demonstrate this.

I've updated our bargs.sh script as follows:

[root@cclient1 ~]# cat bargs.sh
#!/bin/bash

echo -e "\e[34m illustrating bash arguments  \e[0m"

echo "the script name with path is:" $0
echo "the script name is:" `basename $0`
echo "the script location is:" `dirname $0`

echo "the 1st argument is:" $1
echo "the 2nd argument is:" $2

echo "the number of arguments are:" $#

echo "the arguments entered are" $*
echo "the arguments entered are" $@


echo "Changing field separator"
IFS='-'
echo "the arguments using \$*" "$*"
echo "the arguments using \$@" "$@"


The output from this script is:

[root@cclient1 ~]# ./bargs.sh sahil suri
        illustrating bash arguments
the script name with path is: ./bargs.sh
the script name is: bargs.sh
the script location is: .
the 1st argument is: sahil
the 2nd argument is: suri
the number of arguments are: 2
the arguments entered are sahil suri
the arguments entered are sahil suri
Changing field separator
the arguments using $* sahil-suri
the arguments using $@ sahil suri

In the above example I changed the value of IFS from the default whitespace to -. So "$*" joined sahil & suri into a single string with - as the separator, but "$@" kept them as separate words.


$?
This isn't an argument supplied to a shell script, but I felt the need to mention it in this article anyway. The $? variable stores the exit status of the most recently executed command. An exit status of zero indicates successful execution of the program, whereas a non-zero exit status indicates an unsuccessful execution. For example:

[root@centops ~]# ls /
bin  boot  cgroup  check_dir  dev  etc  home  lib  lib64  lost+found  media  mnt  opt  Packages  proc  quadstor  R_D  root  sbin  selinux  srv  sys  tmp  usr  var
[root@centops ~]# echo $?
0
[root@centops ~]# ls /sahil
ls: cannot access /sahil: No such file or directory
[root@centops ~]# echo $?
2

This variable is critical to scripting because using its value we can shape the flow of the script.
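A minimal sketch of shaping flow with the exit status, using the && / || short-circuit operators (the paths & commands here are made up for illustration):

```shell
# Run the follow-up only on success, the fallback only on failure.
mkdir -p /tmp/demo_dir && echo "directory ready" || echo "could not create it"

# Capture the status for later branching.
rc=0
ls /nonexistent 2>/dev/null || rc=$?
if [ "$rc" -ne 0 ]; then
    echo "ls failed with exit status $rc"
fi
```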

Tuesday, 22 November 2016

PAM modification required for AD integration in Linux

A centralized authentication mechanism is essential in any medium to large environment. In Linux, we can integrate client servers with Active Directory such that users logging into the systems are authenticated via their AD credentials, provided they have privileges to log into the system. For the client-side AD integration we can use winbind or SSSD. This article is written using the SSSD method.
In this article I won't delve into the complete client-side setup for centralized AD authentication but will instead focus on a subset of the process, which in this case is the PAM part.

In order to configure active directory authentication on Linux client servers, we basically need to modify three PAM related files under /etc/pam.d. They are:

  1. password-auth
  2. system-auth
  3. sshd

Given below are the three files with the required modifications in place:

[root@localhost pam.d]# cat password-auth
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        sufficient    pam_sss.so use_first_pass
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     [default=bad success=ok user_unknown=ignore] pam_sss.so
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=5 type= minlen=8 dcredit=-1 ucredit=-1 ocredit=-1 lcredit=-1
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    sufficient    pam_sss.so use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     optional      pam_oddjob_mkhomedir.so umask=0077
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     optional      pam_sss.so
[root@localhost pam.d]#

[root@localhost pam.d]# cat system-auth
#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed
auth        required      pam_env.so
auth        sufficient    pam_fprintd.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        sufficient    pam_sss.so use_first_pass
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     [default=bad success=ok user_unknown=ignore] pam_sss.so
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=5 type= minlen=8 dcredit=-1 ucredit=-1 ocredit=-1 lcredit=-1
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    sufficient    pam_sss.so use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
session     optional      pam_oddjob_mkhomedir.so umask=0077
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     optional      pam_sss.so
[root@localhost pam.d]#
[root@localhost pam.d]#

[root@localhost pam.d]# cat sshd
#%PAM-1.0
auth        required      pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed
auth       required     pam_sepermit.so
auth       include      password-auth
account    required     pam_nologin.so
account    include      password-auth
password   include      password-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open env_params
session    optional     pam_keyinit.so force revoke
session    include      password-auth
[root@localhost pam.d]#


To summarize, we've used two PAM modules in addition to the default ones already present in the files at install time: pam_listfile.so & pam_sss.so.

Now, let's dive in to the details of these two modules:

pam_sss.so:

This is the PAM interface to the System Security Services daemon (SSSD). 
Given below is a description of the options available with this module:

quiet
Suppress log messages for unknown users.

forward_pass
If forward_pass is set the entered password is put on the stack for other PAM modules to use.

use_first_pass
The argument use_first_pass forces the module to use a previously stacked module's password & never prompt the user; if no password is available or the password is not appropriate, the user will be denied access.

use_authtok
When changing passwords, force the module to set the new password to the one provided by a previously stacked password module.

retry=N
If specified the user is asked another N times for a password if authentication fails. Default is 0.
Please note that this option might not work as expected if the application calling PAM handles the user dialog on its own. A typical example is sshd with PasswordAuthentication.


pam_listfile.so:

This module is used to determine which users will be allowed access to the servers. We can use it even without an AD integration setup, as its function is simply to determine which users are allowed access based on entries in a file. Given below is a description of the options available with this module:

item=[tty|user|rhost|ruser|group|shell]
What is listed in the file and should be checked for.

sense=[allow|deny]
Action to take if the item is found in the file; if the item is NOT found in the file, the opposite action is requested.

file=/path/filename
File containing one item per line. The file needs to be a plain file and not world writable.

onerr=[succeed|fail]
What to do if something weird happens like being unable to open the file.

apply=[user|@group]
Restricts the user class to which the restriction applies. Note that with item=[user|ruser|group] this does not make sense, but for item=[tty|rhost|shell] it has meaning.

quiet
Do not treat service refusals or missing list files as errors that need to be logged.


With this understood I'll just elaborate on the statement we've used in the above example files for password-auth & system-auth files:

auth        required      pam_listfile.so onerr=fail item=group sense=allow file=/etc/login.group.allowed

The pam_listfile.so module will check for user group entries in the file /etc/login.group.allowed. If a user is a member of a group mentioned in the file, the user will be allowed to access the server. If the user is not a member of a group mentioned in the file, the user is denied access.

Here's a sample login.group.allowed file:

cat /etc/login.group.allowed
root
opcgrp
unixadmin
oinstall
SG-INFRA-HPOV-L2
SG-INFRA-Unix-L2

In the above file, the groups SG-INFRA-HPOV-L2 & SG-INFRA-Unix-L2 are created in Active Directory & members of these groups will be allowed login access to the server. The remaining groups are created locally on the OS & users belonging to these groups will also be allowed access to the server.

Assuming that the SG-INFRA-HPOV-L2 & SG-INFRA-Unix-L2 group members are HP-OV & UNIX admin team members respectively, we'll make the below additions in /etc/sudoers file:

%SG-INFRA-Unix-L2 ALL=(ALL) NOPASSWD: ALL
%SG-INFRA-HPOV-L2 ALL=(ALL) NOPASSWD: /bin/su - hpov_user


The effect of these modifications is instantaneous & does not require any service restart or a system reboot.


Just a couple of gotchas:
  • If a user is not a member of a group mentioned in the file but has ssh keys exchanged, the user will still be granted access based on key-based authentication.
  • In my experience, if we use the pam_sss.so module in conjunction with the pam_faillock or pam_tally2 modules, keeping all other parameters the same, then no users are able to log into the system.

Sunday, 20 November 2016

Disabling password aging in HP-UX

Disabling password aging in any operating system is a security risk, but if the concerned system is intended for some sort of file transfer use, like automated sftp file transfers, then dealing with expired passwords every couple of months can cause some issues.
This article describes the process to disable the password aging policy globally for all users in HP-UX.

We'll be using SAM for this. So, as root user type sam on the command line & the SAM TUI menu will open.

From there navigate to Auditing & security > System security policies


At the System security policies menu press the space bar to edit the policies.


From here you can see that password aging is currently enabled. You can disable it & press ok to save changes.


Working with tasksel on Ubuntu 16.04 LTS


tasksel is a neat utility installed in Ubuntu 16.04 by default which allows the user to install groups of software packages & their associated dependencies together as a single task.
This can be useful, for example, if we want to deploy the LAMP stack, or if we have a scheduled activity coming up we can proactively write a tasksel task to suit our requirements & execute it during the change window.

To view a list of currently available tasks, type tasksel to view the TUI or tasksel --list-tasks for pure CLI output.




Creating custom tasksel tasks:


Go to the directory /usr/share/tasksel/descs. This contains the task descriptions which we saw when we executed the tasksel & tasksel --list-tasks commands.

root@buntu:/usr/share/tasksel/descs# pwd
/usr/share/tasksel/descs
root@buntu:/usr/share/tasksel/descs# ls
debian-tasks.desc  ubuntu-tasks.desc

We can edit one of these descriptions or write our own description.

So, without any further ado let's write ourselves a description.
We'll be writing it under /usr/share/tasksel/descs since the system gathers descriptions from files with the .desc extension from this directory alone.
Here is a task file named my.desc I wrote for demonstration purposes.

Task: testing-tasksel
Relevance: 3
Description: A tasksel task to test out tasksel
 A tasksel task to test out tasksel
Key:
Packages: list
 tree
 apache2

The Task directive is the title of the task.
Relevance defines how far up the tasksel menu we would like the task to appear.
Description is the information about the task which we'll see in the tasksel menu.
The Key attribute lists any dependent packages which should already be installed in order for this task to work.
Finally, in the Packages directive we list the packages that need to be installed as part of our task.

Now if we type tasksel we would be able to see the task description of our newly created task in the menu.



To run this task, type tasksel install <task-title> from the command line, or use the space bar to select the task from the menu & press ok to run it.



Removing the packages installed under a task is just as easy as installing them.
Just type #tasksel remove <task-title>

Script to report number of characters, words & lines in a file

While browsing through a shell scripting forum I came across a problem statement: someone wanted a script to report the number of characters, words & lines in a file without using the wc command, with which this would have been very straightforward.

So I decided to give it a try & after some hits & misses I got a working script ready.
Here is the script:

[root@cclient1 ~]# cat filecheck.sh
#!/bin/bash

case $2 in

-h) echo -e "The number of letters in file are: \n"

        b=0
        for i in `cat $1 | tr -d " "`
        do
        a=$(expr length $i)
        let b+=$a
        done
        echo $b
        ;;

-k) echo -e "The number of words in the file are: \n"

    awk  '{total=total+NF}; END {print total  }' $1
        ;;

-s) echo -e "The number of lines in file are: \n"
    awk  ' END {print NR }' $1
        ;;

*) echo "Incorrect usage"
    ;;

esac


Below is a description of what's happening.
  • In the first part, the for loop iterates through each line of the file. I used tr to remove spaces so that an entire line is treated as a single string, used expr to calculate the length of each string & finally used let to add up the string lengths of the individual lines.
  • In the second part, I used awk's built-in NF (number of fields) variable to sum up the number of fields in each line & print the final result.
  • In the final part, I used awk's built-in NR (number of records) variable to display the number of lines in the file.

Here's a demo of the script in action:

[root@cclient1 ~]# cat test
sa hi l su ri
un ix li n ux
hp ux


[root@cclient1 ~]# ./filecheck.sh test -h
The number of letters in file are:

22
[root@cclient1 ~]# ./filecheck.sh test -k
The number of words in the file are:

12

[root@cclient1 ~]# ./filecheck.sh test -s
The number of lines in file are:

3
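For comparison, the same three counts can be collapsed into a single awk pass. This is my own variation, not from the forum; it strips spaces per line so the letter count matches the script above:

```shell
# Recreate the sample file & count letters, words & lines in one pass.
printf 'sa hi l su ri\nun ix li n ux\nhp ux\n' > test
awk '{ line = $0
       gsub(/ /, "", line)          # drop spaces before counting letters
       letters += length(line)
       words += NF }
     END { print letters, words, NR }' test    # prints: 22 12 3
```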

Friday, 18 November 2016

Quick one liner to get swap usage in Solaris 10

Calculating swap usage in Solaris without top or prstat isn't easy, since the output of 'swap -s' doesn't exactly paint an easily decipherable picture.

So what do we do?

We do some piping, some translation with tr & finally awk!

Take this example of the 'swap -s' output from a Solaris 10 server.

root@localhost:/# swap -s
total: 11454880k bytes allocated + 0k reserved = 11454880k used, 55653984k available
root@localhost:/#

To make it really easy to interpret, I did this:

root@localhost:/# swap -s | tr -d "k$" | awk '{total= $9 + $11;} {print 100 * $9 / total, "% swap is used"}'
17.0699 % swap is used
root@localhost:/#

That's it!
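If you want to sanity-check the pipeline without a Solaris box, you can feed it a captured 'swap -s' line (the percentage may differ slightly from the live run above, since swap usage shifts between captures):

```shell
# Field 9 is used swap, field 11 is available swap; tr strips the trailing k.
echo 'total: 11454880k bytes allocated + 0k reserved = 11454880k used, 55653984k available' |
  tr -d 'k' | awk '{ total = $9 + $11; print 100 * $9 / total, "% swap is used" }'
```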

Embedding some HTML in a shell script


Shell scripts are awesome, & another awesome trick I came across recently is wrapping our shell command output in some basic HTML tags & redirecting the content to an HTML file. We can then view the output of our shell commands in a web page. This can be pretty useful when we need to share some command outputs as reports with other internal teams or with upper management; a web page may be easier to read than a text file.

Here is the script I wrote:

[root@centdb DB]# cat web.sh
#!/bin/bash

# A here-document avoids the quoting clashes that occur when the
# HTML attribute quotes appear inside an echo "..." string.
cat > web.html <<EOF
<html>
<body text="blue">

<h1> $(hostname) </h1>
<h2> script ran at $(date) </h2>

<font face="verdana" size="4">

<pre>
<font color="red"> system uptime is</font>
$(uptime)
</pre>

<pre>
<font color="red"> system disk utilization is</font>
$(iostat -xt)
</pre>

<pre>
<font color="red">File system utilization is</font>
$(df -h)
</pre>

</font>

</body>
</html>
EOF


It's some basic stuff. I echoed the content of the entire script body to the output HTML file web.html. I used basic commands like uptime, iostat & df to illustrate the usage. You can use more complex commands or even functions if you'd like.
As far as the HTML part is concerned, that too is basic stuff with some usage of the font tag to add some color here and there.
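A variation worth knowing: an unquoted heredoc keeps the $( ) command substitutions working while the quotes inside the HTML need no escaping at all. A minimal sketch:

```shell
#!/bin/bash
# With an unquoted heredoc delimiter, $( ) still expands, and the HTML
# attribute quotes can be written as-is with no escaping.
cat > web.html <<EOF
<html>
<body text="blue">
<h1> $(hostname) </h1>
<pre> system uptime is
$(uptime)
</pre>
</body>
</html>
EOF
```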

When you open the resultant web.html file in a browser, it looks like this:



Thursday, 17 November 2016

AWK cheat sheet


Similar to the sed cheat sheet I shared in the previous article here, this article will be an awk cheat sheet. The examples illustrated here may not be entirely original, as this is something I've compiled over the years while using awk. Without any further ado, here it goes:



AWK Expression [Description]
awk '/l.c/{print}' /etc/hosts [regex match lines containing l & c separated by any single character]
awk '/l*c/{print}' /etc/hosts [regex match zero or more occurrences of l followed by c]
awk '/[al1]/{print}' /etc/hosts [regex match lines containing any of the characters a, l or 1]
awk '/[0-9]/{print}' /etc/hosts [print all lines with numbers in them]
awk '/^10./ {print}' /etc/hosts [print all lines beginning with 10]
awk '/rs$/{print}' /etc/hosts [print all lines ending with rs]
awk '/\$25.00/{print}' somedta.txt [escaping the $ character]
awk '//{print $1, $2, $3; }' somedta.txt [print columns 1, 2 & 3 with fields separated by a space]
awk '//{printf "%-10s %s\n",$2, $3 }' my_shopping.txt [improves the spacing between fields]
awk '/ *\$[2-9]\.[0-9][0-9] */ { print $1, $2, $3, $4, "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' food_prices.list [multiple pattern matches & awk commands separated by ;]
awk '/ *\$[2-9]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { printf "%-10s %-10s %-10s %-10s\n", $1, $2, $3, $4; }' somedta.txt [use printf for improved formatting]
awk '/ *\$[2-9]\.[0-9][0-9] */ { print $0 "*" ; } / *\$[0-1]\.[0-9][0-9] */ { print ; }' somedta.txt [$0 denotes the entire line in awk]
awk '$3 <= 30 { printf "%s\t%s\n", $0,"**" ; } $3 > 30 { print $0 ;}' somedta.txt [test the value of the 3rd column & print accordingly]
awk '($3 ~ /^\$[2-9][0-9]*\.[0-9][0-9]$/) && ($4=="Tech") { printf "%s\t%s\n",$0,"*"; } ' somedta.txt [example of using multiple conditions in a single awk command]
awk '$4 <= 20 { printf "%s\t%s\n", $0,"*" ; next; } {print $0 ;}' somedta.txt [append a * to the line if the value in the 4th column is less than or equal to 20]
ls -l | awk '$3 != "sahil" {print}' [print files not owned by user sahil]
uname -a | awk 'hostname = $2 {print hostname}' [using variables: assign the value of the 2nd field to a variable named hostname]
awk '/^example.com/ { counter=counter+1 ; printf "%s\n", counter ; }' somedta.txt [use a numeric variable to print a running count of lines beginning with example.com]
awk '/^example.com/ { counter=counter+1 ;} END {printf "%s\n", counter ; }' somedta.txt [print only the total number of times example.com occurs in the file]
awk 'BEGIN {count=0} /^example.com/ {count+=1} END {printf "%s\n", count ;}' somedta.txt [same result as the above example but using both BEGIN & END]
ls -l | grep ^- | awk 'BEGIN {total=0} {total+=$5} END {print total/1024/1024}' [print the total size of files in the current directory in MB]
awk 'BEGIN {print "this is a begin Test"} /^example.com/ { counter=counter+1 ;} END { printf "%s\n", counter ; }' somedta.txt [using BEGIN & END. BEGIN is executed before any input lines are read; END is executed after all input lines are read]
awk '{print FILENAME}' somedta.txt [FILENAME is a built-in which stores the file name. This prints the file name once for every line in the file]
awk '{print NR, "has", NF, "fields"}' somedta.txt [NR is the number of records/rows read so far; NF is the number of fields/columns in the current line]
awk ' END { print "Number of records in file is: ", NR } ' somedta.txt [print the total number of rows]
awk -F':' '{ print $1, $4 ;}' /etc/passwd [change the input field separator]
awk -F':' '$1 == "sahil" {print}' /etc/passwd [match user sahil in the passwd file & print the matching line]
awk -F':' '/sahil/ {print "user", $1,"has shell", $7}' /etc/passwd [search for user sahil in the passwd file & print the user name & shell]
awk -F':' '{if($1 == "sahil") print ;}' passwd [same as the above example but using an if condition]
awk -F';' '{if ($1 == "12345") {$6 = 5000;} {OFS = ";"} {print $0;}}' file.txt [if the 1st column has the value 12345 then change the value of the 6th column to 5000]
awk ' BEGIN { FS=":" ; } { print $1, $4 ; } ' /etc/passwd [change the input field separator, 2nd method]
awk -F':' ' BEGIN { OFS="==>" ;} { print $1, $4 ;}' /etc/passwd [change the input & output field separators]
user=root ; awk "/$user/ {print}" /etc/passwd [use a shell variable in an awk statement]
awk 'BEGIN{ for(count=0;count<=5;count++){ print "sometext"} }' [for loop in awk. This prints the string sometext 6 times to stdout]
awk 'BEGIN {IGNORECASE = 1} /SaHil/ {print ;}' somedata.txt [do a case insensitive search (IGNORECASE is gawk-specific)]
echo "sahil" | awk '{print substr ($1,1,2)}' [use the substr function in awk to extract part of a string]
awk 'sub ("example.com", "test.com",$1)' somedta.txt [replace example.com in the 1st column with test.com]
awk 'BEGIN {count=0}
{ if($1 == "example.com")
{count++}
if(count == 3)
{ sub("example.com","UNIX",$1)}}
{ print $0}' somedta.txt [replace the 3rd occurrence of example.com with UNIX]
awk '{print $0 "\n TESTLINE"}' somedta.txt [insert the word TESTLINE on a new line after every line]
awk 'NR >3 && NR < 6 {print}' somedta.txt [print lines 4 & 5 from the file]
awk '!/^$/' somedta.txt [remove blank lines from a file]
awk 'NR%2{printf "%s ",$0;next;}1' somedta.txt [join every 2 lines, replacing the newline with a space]
awk '{printf $0;printf " "}NR % 2 ==0 {print " "}' somedta.txt [join every 2 lines, replacing the newline with a space]
awk '{printf $0;printf " "}NR % 3 ==0 {print " "}' somedta.txt [join every 3 lines, replacing the newline with a space]
awk '{ print $NF }' somedta.txt [print the last column in a file]
df -hTP | awk '{gsub(/%/,"")}1 {print $1,$6}' [strip the % character from the df -h output]
df -hTP | awk '{gsub(/%/,"",$6)}1 {print $1,$6}' [strip the % character from the df -h output, limited to the 6th column]
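To sanity-check a few of the built-in variable entries above, here is a quick run against a throwaway sample file (the path & contents are mine, purely for illustration):

```shell
# Build a small sample file and exercise a few of the cheat-sheet entries.
printf 'one two three\nfour five\nsix\n' > /tmp/somedta.txt

awk '{print NR, "has", NF, "fields"}' /tmp/somedta.txt         # NR & NF per line
awk 'END { print "Number of records in file is: ", NR }' /tmp/somedta.txt
awk '{ print $NF }' /tmp/somedta.txt                           # last column of each line
awk '!/^$/' /tmp/somedta.txt                                   # drop blank lines (none here)
```

The first command prints "1 has 3 fields", "2 has 2 fields" & "3 has 1 fields", and the record count comes out as 3.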
