Thursday, 29 June 2017

Shell script to add a public key to multiple users

In this article I'd like to share a script I wrote to add the id_rsa.pub public key of a user on one server to a given list of users present on another server. The script is executed on the destination server.

The task sounds simple enough to accomplish with the help of a simple for loop, but I've added some checks to make the script more user friendly, so that you may be in a position to share it with your colleagues and perhaps even end users if they have sufficient privileges to add keys. I've also added some code that I might like to reuse in the future.

So, here is the script:

#!/bin/bash

###############################################
#Purpose: Add public ssh key to multiple users#
#Author:  Sahil Suri                          #
#Date:    24/06/2017                          #
###############################################

cat /dev/null > sftp_user_not_found.txt

cat /dev/null > sftp_user_key_add.txt

#This will only check for the existence of the user and the .ssh directory within the home directory of the user#

function check_dir() {
echo "Please supply list of users: "
read USER_LIST

for USERNAME in `cat ${USER_LIST}`

do

echo "Checking if user ${USERNAME} exists"

grep -w ${USERNAME} /etc/passwd > /dev/null

if [ $? -eq 0 ]
 then
   echo -e "The user ${USERNAME} exists on server $(hostname) \n"
   echo "The user ${USERNAME} exists on server $(hostname)"
   HOME_DIR=$(grep -w ${USERNAME} /etc/passwd | awk -F: '{print $6}')
   HOME_DIR_PERM=$(ls -ld ${HOME_DIR} | awk '{print $1}' | tr -d "d")
   GID_NUMERIC=$(grep -w ${USERNAME} /etc/passwd | awk -F: '{print $4}')
   GID_WORD=$(grep -w ${GID_NUMERIC} /etc/group | awk -F: '{print $1}')

   echo -e "Home directory for user ${USERNAME} is ${HOME_DIR} and its permissions are ${HOME_DIR_PERM} \n"
   echo "Home directory for user ${USERNAME} is ${HOME_DIR} and its permissions are ${HOME_DIR_PERM}" >> sftp_user_key_add.txt

   echo "Checking if .ssh directory exists for user ${USERNAME}"
   ls -l ${HOME_DIR}/.ssh > /dev/null 2> /dev/null
    if [ $? -ne 0 ]
         then
          echo -e ".ssh directory does not exist for user ${USERNAME} \n"
    fi

else
   echo -e "The user ${USERNAME} does not exist on server $(hostname) \n"
   echo "The user ${USERNAME} does not exist on server $(hostname) " >> sftp_user_not_found.txt
fi

done

}


#This will check for the existence of the user and the .ssh directory within the home directory of the user#
#and append the supplied key to the authorized_keys file for each user in the provided list#

function add_key() {

echo "Please supply list of users: "
read USER_LIST

echo "Please supply the key file you wish to append authorized_keys files of these users: "
read KEY_FILE

for USERNAME in `cat ${USER_LIST}`

do

echo "Checking if user ${USERNAME} exists"

grep -w ${USERNAME} /etc/passwd > /dev/null

if [ $? -eq 0 ]
 then
   echo -e "The user ${USERNAME} exists on server $(hostname) \n"
   echo "The user ${USERNAME} exists on server $(hostname)"
   HOME_DIR=$(grep -w ${USERNAME} /etc/passwd | awk -F: '{print $6}')
   HOME_DIR_PERM=$(ls -ld ${HOME_DIR} | awk '{print $1}' | tr -d "d")
   GID_NUMERIC=$(grep -w ${USERNAME} /etc/passwd | awk -F: '{print $4}')
   GID_WORD=$(grep -w ${GID_NUMERIC} /etc/group | awk -F: '{print $1}')

   echo -e "Home directory for user ${USERNAME} is ${HOME_DIR} and its permissions are ${HOME_DIR_PERM} \n"
   echo "Home directory for user ${USERNAME} is ${HOME_DIR} and its permissions are ${HOME_DIR_PERM}" >> sftp_user_key_add.txt

   echo "Checking if .ssh directory exists for user ${USERNAME}"
   ls -l ${HOME_DIR}/.ssh > /dev/null 2> /dev/null
    if [ $? -ne 0 ]
         then
          echo -e ".ssh directory does not exist for user ${USERNAME} \n"
          echo "Creating .ssh directory and setting permissions for ${USERNAME} user"
          sudo mkdir ${HOME_DIR}/.ssh
          sudo chmod 700 ${HOME_DIR}/.ssh
          sudo chown ${USERNAME}:${GID_WORD} ${HOME_DIR}/.ssh
          sudo touch ${HOME_DIR}/.ssh/authorized_keys
          sudo chown ${USERNAME}:${GID_WORD} ${HOME_DIR}/.ssh/authorized_keys
          sudo chmod 644 ${HOME_DIR}/.ssh/authorized_keys
          sudo chmod 750 ${HOME_DIR}
    fi

    #Append the key whether or not the .ssh directory already existed#
    #Piping through sudo tee keeps the append working even when the script is not run directly as root#
    cat ${KEY_FILE} | sudo tee -a ${HOME_DIR}/.ssh/authorized_keys > /dev/null

else
   echo -e "The user ${USERNAME} does not exist on server $(hostname) \n"
   echo "The user ${USERNAME} does not exist on server $(hostname) " >> sftp_user_not_found.txt
fi

done

}

#Pass an argument to the script indicating how you'd like to execute it#

OPTION="$1"

if [ "$#" -ne 1 ]
 then
  echo "You must specify an option to run the script"
  echo "-h for help or -r to run the script. Exiting now"
  exit
fi

#Required case statement logic to implement user option#

case $OPTION in
 "-h")
    echo "Displaying help"
        echo "To run the script type: ./add_public_key.bash -r"
        echo "You need to become root via sudo su - before running the script"
        echo "Enter the file containing list of users within whose home key is to be added when prompted"
        echo "Enter the file containing the public key when prompted"
        ;;
 "-r")
    echo "Running the script"
        echo "type 1 and press enter if you wish to check for the presence of .ssh directory"
        echo "type 2 and press enter if you wish to copy the key file right now"
        read USER_SEL

        if [ $USER_SEL -eq 1 ]
         then
          check_dir
        elif [ $USER_SEL -eq 2 ]
          then
           add_key
        else
           echo "Invalid option"
          exit
         fi
        ;;
  *)
    echo "invalid execution"
        ;;
esac


The script can be run in two modes:

Help mode: If you pass the -h option, the script runs in help mode and displays information on how to use it.

Run/execution mode: If you pass the -r option, the script runs in execution mode and you'll be prompted for one more selection: you can either check for the presence of the .ssh directory for the list of users, or add the key for the source user from which passwordless authentication needs to be configured.

The script has built-in checks for the presence of each user supplied in the list and for the presence of the users' .ssh directories, a check to terminate the script if no option is supplied, and logging of some of the command output.
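For reference, a typical run on the destination server might look like the following; the list file and key file names here are just placeholders.

sudo su -
./add_public_key.bash -r
#Choose option 2 when prompted, then supply for example /tmp/user_list.txt and /tmp/id_rsa.pub#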

I hope this has been a nice read and I look forward to your feedback.

Wednesday, 28 June 2017

Script to check open ports in Linux

In this brief article I'd like to share a short script I recently wrote to check the status of different ports in Linux. It may prove useful as a pre-check or post-check around a maintenance activity, or you can put it in cron if you need to monitor the port corresponding to a service at regular intervals and relying on conventional monitoring tools is not an option.

Here is the script:

[root@still ~]# cat port_check.bash
#!/bin/bash

##Add a file containing a list of port numbers to check##

PORT_LIST="/root/plist"

while read PNUM
do

netstat -tulpn | grep -w ":${PNUM}" > /dev/null

if [ $? -eq 0 ]
 then
  echo "port number ${PNUM} is listening on `hostname`"
else
 echo "port number ${PNUM} is not listening on `hostname`"
fi

done < ${PORT_LIST}


To test it I've created the file /root/plist with some port numbers.

[root@still ~]# cat /root/plist
80
22
30


Let's run the script:

[root@still ~]# ./port_check.bash
port number 80 is listening on still
port number 22 is listening on still
port number 30 is not listening on still


This is more of an arbitrary setup. You can add logic to send yourself an email if any of the ports are not found to be in a listening state, as sketched below.
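This assumes a working local mail setup and the mailx-style mail command on the host; the recipient address below is just a placeholder.

#!/bin/bash

##Mail an alert for any port in the list that is not listening##

PORT_LIST="/root/plist"
ALERT_EMAIL="admin@example.com"

while read PNUM
do

netstat -tulpn | grep -w ":${PNUM}" > /dev/null

if [ $? -ne 0 ]
 then
  echo "port number ${PNUM} is not listening on `hostname`" | mail -s "Port ${PNUM} down on `hostname`" "${ALERT_EMAIL}"
fi

done < ${PORT_LIST}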

Tuesday, 27 June 2017

Script to check if all fstab entries are mounted

In this article I'd like to share a script that compares the file system entries in the /etc/fstab file with the file systems that are currently mounted, and reports any file system that has an entry in /etc/fstab but is not mounted.

So, here is the script:

[root@still ~]# cat tab.bash
#!/bin/bash

FSTAB_ENTRIES=$(cat /etc/fstab | awk '$1 !~/#|^$|swap/ {print $2}')

for FS in ${FSTAB_ENTRIES}

do

df -hPT | grep -wq ${FS}

if [ $? -eq 0 ]
 then
  echo "The file system ${FS} has an entry in /etc/fstab file and is mounted"
 else
  echo "The file system ${FS} has an entry in /etc/fstab file but is not mounted"
fi

done
[root@still ~]#


Let's test it out.

This is my /etc/fstab file and the current df output.

[root@still ~]# cat /etc/fstab ; df -hTP

#
# /etc/fstab
# Created by anaconda on Sat Dec 24 12:03:40 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        1 1
UUID=93b957e6-f8f9-42f9-a7ad-9927f694f1ce /boot                   xfs     defaults        1 2
/dev/mapper/centos-swap swap                    swap    defaults        0 0

##testing##
/dev/vg1/lv01   /test_dir1                       ext4     defaults        1 1
/dev/vg1/lv02   /test_dir2                       ext4     defaults        1 1
/dev/vg1/lv03   /test_dir3                       ext4     defaults        1 1
/dev/vg1/lv04   /test_dir4                       ext4     defaults        1 1
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        18G  6.0G   12G  35% /
devtmpfs                devtmpfs  481M     0  481M   0% /dev
tmpfs                   tmpfs     490M   80K  490M   1% /dev/shm
tmpfs                   tmpfs     490M  7.1M  483M   2% /run
tmpfs                   tmpfs     490M     0  490M   0% /sys/fs/cgroup
/dev/sda1               xfs       497M  134M  363M  27% /boot
/dev/mapper/vg1-lv01    ext4      283M  2.1M  262M   1% /test_dir1
/dev/mapper/vg1-lv02    ext4      283M  2.1M  262M   1% /test_dir2


Notice that the /test_dir3 and /test_dir4 file systems have entries in the /etc/fstab file but are not mounted at the moment.

Let's run the script to verify.

[root@still ~]# ./tab.bash
The file system / has an entry in /etc/fstab file and is mounted
The file system /boot has an entry in /etc/fstab file and is mounted
The file system /test_dir1 has an entry in /etc/fstab file and is mounted
The file system /test_dir2 has an entry in /etc/fstab file and is mounted
The file system /test_dir3 has an entry in /etc/fstab file but is not mounted
The file system /test_dir4 has an entry in /etc/fstab file but is not mounted


There you have it.

I hope this script proves useful as a pre-check or post-check during maintenance activities.
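If you'd also like the script to attempt to mount anything it finds missing, a small extension along these lines should work; run it as root, and note that mount picks up the options from /etc/fstab when given just the mount point.

#!/bin/bash

FSTAB_ENTRIES=$(awk '$1 !~/#|^$|swap/ {print $2}' /etc/fstab)

for FS in ${FSTAB_ENTRIES}

do

df -hPT | grep -wq ${FS}

if [ $? -ne 0 ]
 then
  echo "The file system ${FS} has an entry in /etc/fstab file but is not mounted, attempting to mount it"
  mount ${FS} || echo "Failed to mount ${FS}"
fi

done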

Configuring AutoFS in Solaris 10

Introduction

Automounting enables a system to automatically mount and unmount NFS resources whenever they are accessed. The resource remains mounted as long as the directory is in use; if it is not accessed for a certain period of time, the directory is automatically unmounted.

Automounting provides the following features:

Saves boot time by not mounting resources when the system boots.
Silently mounts and unmounts resources without the need for superuser privileges.
Reduces network traffic because NFS resources are mounted only when in use.

The client-side service uses the automount command, the autofs file system and the automountd daemon to automatically mount file systems on demand.


Working:

The automount service svc:/system/filesystem/autofs:default reads the master map file auto_master to create the initial set of mounts at system startup. These mount points are the locations under which file systems are mounted when access requests are received. After the initial mounts are made, the automount command is used to update the autofs mounts as necessary.

The automount service uses the following maps to perform automounting of file systems on demand:

Master Maps:
The master map auto_master determines the location of all autofs mount points. Given below is a sample auto_master file.

root@sandbox:/# cat /etc/auto_master
#
# Copyright 2003 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)auto_master        1.8     03/04/28 SMI"
#
# Master map for automounter
#
+auto_master
/net            -hosts          -nosuid,nobrowse
/home           auto_home       -nobrowse


Direct Map:
A direct map is an automount point where there is a direct association between a mount point on the client and a directory on the server. Direct map entries are preceded by /- in the auto_master file.


Indirect Map:
An indirect map uses the substitution value of a key to establish the association between a mount point on the client and a directory on the server. The auto_home map is an example of an indirect map; a sample is shown below.
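For example, an indirect map such as /etc/auto_home typically contains entries like the ones below; the server and path names here are only illustrative.

# key        mount options     location
john         -rw               sandbox:/export/home/john
*            -rw               sandbox:/export/home/&

The * key matches any directory name requested under /home and the & substitutes that name into the server-side path.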


Demonstration:

For this demo I'll perform an automount operation for a ZFS dataset to a client.

On the server:

Create the zpool and the corresponding dataset to be shared.

root@sandbox:/# zpool create spool c2t0d0
root@sandbox:/# zfs create spool/sfs

Check value of sharenfs property

root@sandbox:/# zfs get sharenfs spool/sfs
NAME       PROPERTY  VALUE     SOURCE
spool/sfs  sharenfs  off       default

Turn on the sharenfs property and verify that it's on.

root@sandbox:/# zfs set sharenfs=on spool/sfs

root@sandbox:/# zfs get sharenfs spool/sfs
NAME       PROPERTY  VALUE     SOURCE
spool/sfs  sharenfs  on        local

Add entry in /etc/dfs/dfstab file.

root@sandbox:/# grep sfs /etc/dfs/dfstab
share  -F nfs  -o rw -d "test share" /spool/sfs
root@sandbox:/#

Use the share command to activate the nfs shares.

root@sandbox:/# share
-               /spool/sfs   rw   ""

Verify that the share is active.

root@sandbox:/# showmount -e sandbox
export list for sandbox:
/spool/sfs (everyone)


On the client:

Check that the automount service is running.

root@trick:/# ps -ef | grep -w automountd
    root   528   527   0 13:48:16 ?           0:00 /usr/lib/autofs/automountd
    root   527     1   0 13:48:16 ?           0:00 /usr/lib/autofs/automountd
root@trick:/# svcs -a | grep -w autofs
online         14:04:34 svc:/system/filesystem/autofs:default

Create a directory to use as the automount mount point.

root@trick:/# mkdir /auto

I'll be using a direct map named auto_test for the purpose of this demonstration. Now I'll add its entry to the /etc/auto_master file.

root@trick:/# grep test /etc/auto_master
/-      auto_test       -nosuid

Create the auto_test direct map file in /etc

root@trick:/# cat /etc/auto_test
/auto   -rw,nosuid      sandbox:/spool/sfs

Run automount -v to refresh the maps.

root@trick:/# automount -v
automount: /auto mounted
automount: no unmounts

Check if /auto gets automatically mounted when accessed.

root@trick:/# date
Tue Jun 27 15:14:51 IST 2017
root@trick:/# df -h /auto/
Filesystem             size   used  avail capacity  Mounted on
auto_test                0K     0K     0K     0%    /auto

root@trick:/# cd /auto/
root@trick:/auto# df -h .
Filesystem             size   used  avail capacity  Mounted on
sandbox:/spool/sfs     976M    21K   976M     1%    /auto
root@trick:/auto# date
Tue Jun 27 15:15:09 IST 2017
root@trick:/auto# df -h /auto/
Filesystem             size   used  avail capacity  Mounted on
sandbox:/spool/sfs     976M    21K   976M     1%    /auto

Friday, 23 June 2017

Booting to the none milestone in Solaris 10 x86 architecture system

In this brief article I'll describe how to boot a Solaris x86 system into the none milestone for troubleshooting purposes. A milestone is basically a designated point in the Solaris boot process that denotes the activation of a certain set of services. Milestones can be considered analogous to run levels.

The available milestones are listed below filtered from the "svcs -a" output:
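They can be pulled out of the service listing with a quick grep. On a typical Solaris 10 system this shows milestones such as single-user, multi-user, multi-user-server, network, name-services, devices and sysconfig, although the exact set may vary slightly between releases.

root@sandbox:/# svcs -a | grep -i milestone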


An optimally functioning system boots to the svc:/milestone/multi-user-server:default milestone, which is analogous to run level 3.
Here's some more information about the multi-user-server milestone:

root@sandbox:/# svcs -l svc:/milestone/multi-user-server:default
fmri         svc:/milestone/multi-user-server:default
name         multi-user plus exports milestone
enabled      true
state        online
next_state   none
state_time   Fri Jun 23 21:57:58 2017
logfile      /var/svc/log/milestone-multi-user-server:default.log
restarter    svc:/system/svc/restarter:default
dependency   require_all/none svc:/milestone/multi-user (online)
dependency   optional_all/none svc:/application/management/dmi (online)
dependency   optional_all/none svc:/application/management/snmpdx (online)
dependency   optional_all/none svc:/network/ssh (online)
dependency   optional_all/none svc:/network/dhcp-server (disabled)
dependency   optional_all/none svc:/network/samba (disabled)
dependency   optional_all/none svc:/network/rarp (disabled)
dependency   optional_all/none svc:/network/nfs/server (disabled)
dependency   optional_all/none svc:/network/winbind (disabled)
dependency   optional_all/none svc:/network/rpc/bootparams (disabled)
dependency   optional_all/none svc:/network/wins (disabled)


If the server is unable to boot to a functional or usable state, the alternative is to boot into single-user mode and troubleshoot the issue. In Solaris, no login services except sulogin on the console run in single-user mode. Apart from single-user mode, we also have the option of booting the server into the none milestone. It's available even though it is not listed in the svcs -a output.
If a problem prevents user programs from starting normally, a Solaris 10 system can be instructed to start as few programs as possible during boot by specifying -m milestone=none in the boot arguments. Once logged in, the svcadm milestone all command can be issued to instruct SMF to continue initialization as usual.

Now, we'll go through a demo.

Reboot the Solaris system and interrupt the boot process by typing e; we will be presented with the below screen:



Use the arrow keys to move to the line beginning with the word kernel and press e to enter the GRUB edit prompt so we can pass a boot argument.

Once here, type -m milestone=none after $ZFS-BOOTFS as shown below.
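On a ZFS root install the edited kernel line ends up looking roughly like this; the multiboot path can differ between installs, and the important part is the appended -m option.

kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS -m milestone=none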


Now press enter. We are back at the previous screen, but now we can observe that the boot argument we provided will be passed to the kernel during system startup.


Press b to continue with boot.

After the system boots, we see a message stating that the system has booted to milestone none, and we are prompted to enter the root password for system maintenance.



At this point we can log in and perform any troubleshooting actions that are required. Once done, we type svcadm milestone all to instruct SMF to proceed with the system initialization process, as shown below.


Wednesday, 21 June 2017

Boot Environments with Live Upgrade under the hood

Every Solaris admin who patches their Solaris infrastructure will be familiar with the Live Upgrade feature of Solaris. I consider it to be used essentially with ZFS, although it can be used with UFS as well. In this article I'll primarily focus on what happens when we create and activate a boot environment; I won't be demonstrating a Live Upgrade patching procedure here.

A boot environment is basically a bootable instance of the Solaris operating system, essentially comprising a root dataset and other optional datasets underneath it. A dataset is a generic name for a ZFS entity. With respect to boot environments, a ZFS dataset refers to the components that make up the boot environment, and these usually live in the root zpool, conventionally named rpool.

I'd like to clarify that for the purpose of this demonstration I'm using Solaris 10 with the root zpool named rpool and there is no separate /var dataset.

When we create a boot environment, the Live Upgrade utility provided within the operating system takes a snapshot of the root ZFS file system, clones that snapshot and populates the new boot environment from the clone. Aside from the root file system, any other datasets are shared between the active and inactive boot environments, which I find amazing!
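Conceptually, the snapshot-and-clone step that lucreate performs for the root dataset boils down to something like the following; the dataset names match the demo below, and lucreate of course also takes care of GRUB menus, configuration files, mount properties and so on.

zfs snapshot rpool/ROOT/s10x_u8wos_08a@testBE
zfs clone rpool/ROOT/s10x_u8wos_08a@testBE rpool/ROOT/testBE
zfs set canmount=noauto rpool/ROOT/testBE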

Let's get into the command line for a demonstration.

First we'll get the current status of our rpool file systems.

root@sandbox:/# zfs list -r rpool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      5.64G  9.98G    34K  /rpool
rpool/ROOT                 3.64G  9.98G    21K  legacy
rpool/ROOT/s10x_u8wos_08a  3.64G  9.98G  3.64G  /
rpool/dump                 1.00G  9.98G  1.00G  -
rpool/export                 44K  9.98G    23K  /export
rpool/export/home            21K  9.98G    21K  /export/home
rpool/swap                    1G  11.0G    16K  -


Now let's run lustatus to check if there are any boot environments created on the system.

root@sandbox:/# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names
root@sandbox:/#

This is a fresh install so lustatus shows no BEs. Now we create one.

root@sandbox:/# lucreate -n testBE
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <s10x_u8wos_08a>.
Current boot environment is named <s10x_u8wos_08a>.
Creating initial configuration for primary boot environment <s10x_u8wos_08a>.
The device </dev/dsk/c0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <s10x_u8wos_08a> PBE Boot Device </dev/dsk/c0d0s0>.
Comparing source boot environment <s10x_u8wos_08a> file systems with the
file system(s) you specified for the new boot environment. Determining
which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <testBE>.
Source boot environment is <s10x_u8wos_08a>.
Creating boot environment <testBE>.
Cloning file systems from boot environment <s10x_u8wos_08a> to create boot environment <testBE>.
Creating snapshot for <rpool/ROOT/s10x_u8wos_08a> on <rpool/ROOT/s10x_u8wos_08a@testBE>.
Creating clone for <rpool/ROOT/s10x_u8wos_08a@testBE> on <rpool/ROOT/testBE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/testBE>.
WARNING: split filesystem </> file system type <zfs> cannot inherit
mount point options <-> from parent filesystem </> file
type <-> because the two file systems have different types.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <testBE> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <testBE> in GRUB menu
Population of boot environment <testBE> successful.
Creation of boot environment <testBE> successful.
root@sandbox:/#


Now that we have created a new boot environment let's check its status.

root@sandbox:/# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10x_u8wos_08a             yes      yes    yes       no     -
testBE                     yes      no     no        yes    -

After the creation of the boot environment, a recursive view of rpool looks as follows:

root@sandbox:/# zfs list -r rpool
NAME                               USED  AVAIL  REFER  MOUNTPOINT
rpool                             5.64G  9.98G    36K  /rpool
rpool/ROOT                        3.64G  9.98G    21K  legacy
rpool/ROOT/s10x_u8wos_08a         3.64G  9.98G  3.64G  /
rpool/ROOT/s10x_u8wos_08a@testBE  68.5K      -  3.64G  -
rpool/ROOT/testBE                 99.5K  9.98G  3.64G  /
rpool/dump                        1.00G  9.98G  1.00G  -
rpool/export                        44K  9.98G    23K  /export
rpool/export/home                   21K  9.98G    21K  /export/home
rpool/swap                           1G  11.0G    16K  -
root@sandbox:/#

We can observe that the new dataset rpool/ROOT/testBE has been created.

The datasets associated with boot environments can be viewed with the lufslist command as shown below:

root@sandbox:/# lufslist -n s10x_u8wos_08a
               boot environment name: s10x_u8wos_08a
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/zvol/dsk/rpool/swap swap       1073741824 -                   -
rpool/ROOT/s10x_u8wos_08a zfs        3911067648 /                   -
rpool                   zfs        6059543040 /rpool              -
rpool/export            zfs             45056 /export             -
rpool/export/home       zfs             21504 /export/home        -
root@sandbox:/#

root@sandbox:/# lufslist -n testBE
               boot environment name: testBE

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/zvol/dsk/rpool/swap swap       1073741824 -                   -
rpool/ROOT/testBE       zfs            103936 /                   -
rpool/export            zfs             45056 /export             -
rpool/export/home       zfs             21504 /export/home        -
rpool                   zfs        6059564544 /rpool              -
root@sandbox:/#


Now, to prove that the testBE boot environment is in fact a clone of a snapshot of the root file system of the original boot environment s10x_u8wos_08a, we'll check its origin property.

The origin property of a ZFS dataset allows us to determine the source of the dataset.

If I check for the origin property of my root zfs dataset rpool/ROOT/s10x_u8wos_08a, I get:

root@sandbox:/# zfs get origin rpool/ROOT/s10x_u8wos_08a
NAME                       PROPERTY  VALUE   SOURCE
rpool/ROOT/s10x_u8wos_08a  origin    -       -

The value is dashed out.

If I check the origin property for my new BE's dataset rpool/ROOT/testBE I get:

root@sandbox:/# zfs get origin rpool/ROOT/testBE
NAME               PROPERTY  VALUE                             SOURCE
rpool/ROOT/testBE  origin    rpool/ROOT/s10x_u8wos_08a@testBE  -
root@sandbox:/#


There we have it. The source of this dataset is the snapshot of our original root dataset, which also verifies that rpool/ROOT/testBE is a clone.

We can go ahead and patch the alternate boot environment but I'm not going to do that here.

Now let's activate testBE.

root@sandbox:/# luactivate testBE
Generating boot-sign, partition and slice information for PBE <s10x_u8wos_08a>
Saving existing file </etc/bootsign> in top level dataset for BE <s10x_u8wos_08a> as <mount-point>//etc/bootsign.prev.
A Live Upgrade Sync operation will be performed on startup of boot environment <testBE>.

Generating boot-sign for ABE <testBE>
Saving existing file </etc/bootsign> in top level dataset for BE <testBE> as <mount-point>//etc/bootsign.prev.
Generating partition and slice information for ABE <testBE>
Copied boot menu from top level dataset.
Generating multiboot menu entries for PBE.
Generating multiboot menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

     mount -Fzfs /dev/dsk/c0d0s0 /mnt

3. Run <luactivate> utility with out any arguments from the Parent boot
environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File </etc/lu/installgrub.findroot> propagation successful
File </etc/lu/stage1.findroot> propagation successful
File </etc/lu/stage2.findroot> propagation successful
File </etc/lu/GRUB_capability> propagation successful
Deleting stale GRUB loader from all BEs.
File </etc/lu/installgrub.latest> deletion successful
File </etc/lu/stage1.latest> deletion successful
File </etc/lu/stage2.latest> deletion successful
Activation of boot environment <testBE> successful.
root@sandbox:/#


Now the lustatus output will look like this:

root@sandbox:/# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10x_u8wos_08a             yes      yes    no        no     -
testBE                     yes      no     yes       no     -

The testBE will become the active boot environment for this system after reboot.


Let's take a look at our zfs list output for rpool again.

root@sandbox:/# zfs list -r rpool
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      5.64G  9.98G  36.5K  /rpool
rpool/ROOT                 3.64G  9.98G    21K  legacy
rpool/ROOT/s10x_u8wos_08a   682K  9.98G  3.64G  /
rpool/ROOT/testBE          3.64G  9.98G  3.64G  /
rpool/ROOT/testBE@testBE    154K      -  3.64G  -
rpool/dump                 1.00G  9.98G  1.00G  -
rpool/export                 44K  9.98G    23K  /export
rpool/export/home            21K  9.98G    21K  /export/home
rpool/swap                    1G  11.0G    16K  -
root@sandbox:/#

Notice that the dataset rpool/ROOT/s10x_u8wos_08a is only using 682K of space now, while rpool/ROOT/testBE is now using 3.64G.

The origin properties for these datasets have also changed as shown below:


root@sandbox:/# zfs get origin rpool/ROOT/testBE
NAME               PROPERTY  VALUE   SOURCE
rpool/ROOT/testBE  origin    -       -
root@sandbox:/#
root@sandbox:/# zfs get origin rpool/ROOT/s10x_u8wos_08a
NAME                       PROPERTY  VALUE                     SOURCE
rpool/ROOT/s10x_u8wos_08a  origin    rpool/ROOT/testBE@testBE  -
root@sandbox:/#

This basically means that a zfs promote operation has been carried out: the original root dataset rpool/ROOT/s10x_u8wos_08a has been replaced by the clone dataset rpool/ROOT/testBE, which will be the root file system after the reboot. Once rpool/ROOT/testBE is the active root file system, we can delete rpool/ROOT/s10x_u8wos_08a and its associated datasets, as they'll no longer be needed unless we want to roll back to the previous BE.
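In ZFS terms the switch is essentially a promote of the clone; done by hand it would look something like this:

zfs promote rpool/ROOT/testBE

After the promote, the snapshot moves under the clone as rpool/ROOT/testBE@testBE and the old root dataset reports that snapshot as its origin, which is exactly what the output above shows.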

I've not been able to drill down into exactly what happens during the reboot that promotes an activated BE to the root file system automatically. I'll definitely write about that if I'm able to ascertain the details.

I hope this article has been helpful in understanding the Live Upgrade process beyond the lucreate, luupgrade and luactivate commands.

Tuesday, 20 June 2017

Getting started with IAM

Identity and Access Management (IAM) is where you manage your AWS users and their access to AWS services. IAM is specifically used to manage users, groups, access policies and roles.
The "root" user is created when you first create the AWS account and has full access to every part of the account; subsequent users created later have no access to any AWS service by default. Access is granted to them via access policies.

To start working with IAM click on IAM under the Security, Identity and Compliance section of the AWS services dashboard. You will be presented with the below screen:


The IAM users sign-in link is the URL that the users we create in IAM will use to authenticate and log in to AWS to gain access to services. If we want to customize the URL to something more user friendly, we can, for example, use Route 53 to create a CNAME record for it.

Next, I'd like to point you to the section labelled Security Status. This is essentially a set of best-practice guidelines from AWS for securing our AWS account. Ideally all the items should be green for us to adhere to best practices.

I've briefly described these items below:


  • The "delete your root access keys" item is already marked green because no root access keys were created when I created the account. 
  • MFA means multi-factor authentication and should be set up for the root account. The MFA device being used could be virtual like a compatible app installed on a device or a hardware device that might be provided by AWS themselves.
  • The "create idnividual IAM users" item is marked orange because I haven't created any IAM users yet. We should avoid using the root account sign in unless absolutely neccessery and prefer using individual IAM user accounts for doing our work.
  • The next item "use groups to assign permissions" is also marked orange becuase I haven't created any groups yet. Best practices dictate that permissions or access policies should be associated with groups and not individual users. This allows for ease of management.
  • The last item is "apply IAM password policy". This helps to apply certain rules for setting user passwords and allows us to enforce strong passwords.



Now, let's create an IAM user.
To do so expand the "create individual users" item.


Click on manage users. This will bring up the below screen:


From here click on add user. We will be presented with the below screen where we can specify the user name and select an access type:


I've specified the user name as sahil and selected the access type as AWS Management Console access. The second option, "programmatic access", creates an access key that we can use within API calls for communication between applications and other AWS services.
I've opted to give a password of my choice instead of an autogenerated password, and I've unchecked the box that forces the user to change their password on first login.
Once done, click on "Next: permissions".

Here we define what level of access will be granted to the user that we are creating. We will be presented with the below set of options:



We could add the user to a group and, in doing so, apply the group's access policies to the user, but since I don't have any groups created yet I'll attach a policy to the user directly by selecting "attach existing policies directly". Note that doing so is a deviation from best practices.

Selecting "attach existing policies directly" option opens up the below list:



Here we can select an existing access policy; we can search for and filter the access policies by typing in the search box and then select the required policy. For example, if I wanted to grant a user access to the EC2 service, I could type ec2 in the search box and select the required policy from the results. We can also create custom policies by clicking on create policy.
From the list of available policies I've selected the first one, "AdministratorAccess", which grants the user full administrative rights to every available AWS service. Once you've selected the required permissions, click on Next: Review.

This opens up a review page where we can review our selections:


From here click on create user. This creates the user and displays the below page, where we can see that our user has been created; it also provides a sign-in link that the new user will use to log in and access AWS.
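The same user creation can also be scripted with the AWS CLI. A rough equivalent of the console steps above is shown below, assuming the CLI is configured with credentials that are allowed to manage IAM; the password is just an example value.

aws iam create-user --user-name sahil
aws iam create-login-profile --user-name sahil --password 'Example-Passw0rd1!' --no-password-reset-required
aws iam attach-user-policy --user-name sahil --policy-arn arn:aws:iam::aws:policy/AdministratorAccess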



I've clicked on the sign in link and the below login page is displayed.



Once I enter my credentials and click on sign in, I'll be logged in to the AWS Management Console as the user sahil.

After logging in, I expanded my user name to confirm that sahil is an IAM user, and under recently visited services I can see that the IAM service was recently used, as shown in the below screenshot:



Before wrapping up the article I'd like to briefly touch upon the implementation of password policies.

From the IAM dashboard, under the security status section expand the item "Apply an IAM password policy" and click on the manage password policy button.



This will bring up the below page, where we can tick the options we want in order to strengthen our password policy.


Once the required selections have been made, click on apply password policy. If we need to make any modifications later, we can delete the existing password policy and create a new one.
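For completeness, the same kind of policy can also be applied from the AWS CLI; the values below are only examples.

aws iam update-account-password-policy --minimum-password-length 12 --require-uppercase-characters --require-lowercase-characters --require-numbers --require-symbols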


I've written extensively about IAM users in this article but haven't really expanded much on groups and roles. I found groups to be somewhat analogous to the groups we have in UNIX/Linux, where users that are part of a group have the same privileges as the group itself. So in the case of IAM groups, an access policy applied to the group also applies to all members of the group. Roles are interesting: they provide the ability to grant a particular AWS service the rights to interact with other AWS services. For example, we can create a role that allows the EC2 service to interact and work with the Amazon S3 service.
