Monday 12 August 2019

Bash: Use for loop to iterate over file without using cat

Introduction:

In this very brief post I want to demonstrate a couple of ways to use a for loop to iterate over a file without using the cat command.

Here is a file containing 10 numbers, 1 number on each line.

cat seq.txt
1
2
3
4
5
6
7
8
9
10

If I wanted to loop over it using a for loop in bash, I could type:

for i in `cat seq.txt`; do echo $i ;done

But what if I don't want to read in the file using the cat command?
I could use command substitution with input redirection to read the file contents, as shown in the example below:

for i in $(<seq.txt); do echo $i ; done
1
2
3
4
5
6
7
8
9
10
                            
One more way of avoiding the cat command would be to read the file into an array and then loop over the array as shown below.

readarray -t ARR < seq.txt

for i in "${ARR[@]}"; do echo $i; done
1
2
3
4
5
6
7
8
9
10
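
As an aside, if the lines in the file could contain spaces or glob characters, a while read loop is another common way to process a file line by line without cat; a minimal sketch:

while IFS= read -r line; do echo "$line"; done < seq.txt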


Conclusion

There is nothing wrong with using the cat command to read file contents while iterating over them in a for loop. I just find reading the file directly via redirection to be more elegant, and I'll try to use it more often.

Friday 2 August 2019

Git cheat sheet

Introduction

This post contains a git cheat sheet that I've created for my use. The content is very succinct. I'll be writing more elaborate posts on installing and using git in the future. Also, this will be an ongoing post and will be periodically updated.


Here's the cheat sheet:


Initialize repository:

[sahil@lab-node:~/rep] $ git init .

Check for anything ready to be staged or committed:
[sahil@lab-node:~/rep] $ git status



Staging changes:


[sahil@lab-node:~/rep] $ git add 1.bash
[sahil@lab-node:~/rep] $ git status
# On branch master
#
# Initial commit
#
# Changes to be committed:
#   (use "git rm --cached <file>..." to unstage)
#
#       new file:   1.bash
#


Commit the changes to the repo:


[sahil@lab-node:~/rep] $ git commit -m "version 1"
[master (root-commit) 252f733] version 1
 1 file changed, 10 insertions(+)
 create mode 100644 1.bash


View log of commits:


[sahil@lab-node:~/rep] $ git log
commit 252f733ded22f7634725961bfd95ecdc0826a69c
Author: Sahil Suri <sahil.suri@emerson.com>
Date:   Fri Aug 2 05:30:02 2019 +0000

    version 1


View one line logs:

[sahil@lab-node:~/rep] $ git log --oneline
3ee84d7 version 2
252f733 version 1


View the difference between commits:

[sahil@lab-node:~/rep] $ git log --graph -p
* commit 3ee84d747f009d7cbbe6f12b3eb71bd39da14220
| Author: Sahil Suri <sahil.suri@emerson.com>
| Date:   Fri Aug 2 05:32:29 2019 +0000
|
|     version 2
|
| diff --git a/1.bash b/1.bash
| index e193461..2ab8d4e 100644
| --- a/1.bash
| +++ b/1.bash
| @@ -7,4 +7,4 @@
|  #version:
|  ##############################################################
|
| -echo "This is testing version 1"
| +echo "This is testing version 2"
|
* commit 252f733ded22f7634725961bfd95ecdc0826a69c
  Author: Sahil Suri <sahil.suri@emerson.com>
  Date:   Fri Aug 2 05:30:02 2019 +0000

      version 1

  diff --git a/1.bash b/1.bash
  new file mode 100644
  index 0000000..e193461
  --- /dev/null
  +++ b/1.bash
  @@ -0,0 +1,10 @@
  +#!/bin/bash
  +
  +##############################################################
  +#Author: Sahil Suri
  +#Date:
  +#Purpose:
  +#version:
  +##############################################################
  +
  +echo "This is testing version 1"


View differences between the current version and a particular commit:


#git diff <commit hash>

[sahil@lab-node:~/rep] $ git log --oneline
c9ef444 this is version 3'
3ee84d7 version 2
252f733 version 1

[sahil@lab-node:~/rep] $ git diff 3ee84d7
diff --git a/1.bash b/1.bash
index 2ab8d4e..c2b325e 100644
--- a/1.bash
+++ b/1.bash
@@ -7,4 +7,4 @@
 #version:
 ##############################################################

-echo "This is testing version 2"
+echo "This is testing version 3"

[sahil@lab-node:~/rep] $ git diff 252f733
diff --git a/1.bash b/1.bash
index e193461..c2b325e 100644
--- a/1.bash
+++ b/1.bash
@@ -7,4 +7,4 @@
 #version:
 ##############################################################

-echo "This is testing version 1"
+echo "This is testing version 3"
[sahil@lab-node:~/rep] $


Retrieving previous versions of files:

This involves a couple of steps:

Retrieve the version of 1.bash from two commits before the current HEAD:

[sahil@lab-node:~/rep] $ git checkout HEAD~2 1.bash

To save this as the current version of the file, use git commit.

To go back to the latest version, follow the steps below:

[sahil@lab-node:~/rep] $ git reset HEAD 1.bash
[sahil@lab-node:~/rep] $ git checkout  1.bash


Add a remote to a git repository:

git remote add origin https://github.com/sahilsuri008/creating_shell_scripts_linux


To push updates to GitHub:

git push origin master
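
To confirm that the remote was added, git remote -v lists the configured remotes with their fetch and push URLs; for the repository added above the output should look roughly like this:

git remote -v
origin  https://github.com/sahilsuri008/creating_shell_scripts_linux (fetch)
origin  https://github.com/sahilsuri008/creating_shell_scripts_linux (push)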


Conclusion

That's it for this post for now. Thank you for taking the time to read it.

Tuesday 30 July 2019

While loops over ssh: solving problems

Introduction

A simple task that we've probably performed countless times is to put a list of servers in a file, pass that list to a for loop, ssh to each server in the loop and run some commands on it. When you try to do the same thing with a while loop, you'd think it would work, but it doesn't unless you are an ssh sensei and know some of its tricks.


The problem

When you feed input to a while loop from a file and try to run ssh commands on the listed servers inside the loop, the loop either hangs or works only on the first server, i.e. it runs one iteration and then stops. The problem is that ssh reads from standard input and therefore consumes all the remaining lines intended for the loop. The fix is to connect its standard input to nowhere, i.e. /dev/null.

ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i" < /dev/null

In some cases redirecting stdin from /dev/null this way is known not to work. A simple solution in that scenario is to use ssh with the -n option, which redirects its stdin from /dev/null, as shown below.


cat dev_list | tr ',' ' ' | while read INS SERVER ETC ; do echo $SERVER -- $INS; sudo ssh -o StrictHostKeyChecking=no -o "BatchMode yes"  -n -q  $SERVER "ls /tmp | grep dbstart | grep log";  [[ $? -eq 0 ]] && echo  $SERVER -- $INS >> log_found.txt; done


Here is the description of the -n option straight from the man page for ssh:

     -n      Redirects stdin from /dev/null (actually, prevents reading from stdin).  This must be used when ssh is run in the background.  A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel.  The ssh program will be put in the background.  (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)
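
Putting the pieces together, here is a minimal sketch of the pattern (servers.txt is a hypothetical file containing one hostname per line):

while read -r SERVER; do
    ssh -n "$USER@$SERVER" 'uptime'
done < servers.txt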


Conclusion

We hope that you found this article useful and that this ssh trick helps you in the future when you run commands remotely via ssh inside a while loop.

Thursday 11 July 2019

Fixing ORA-27102: out of memory on Solaris 11 during DB installation

Introduction

While installing Oracle database version 12c in a Solaris 11 zone, our DB team reported an "ORA-27102: out of memory" error.



They asked us to investigate, and on checking I found that memory utilization was very low and that sufficient memory, to the tune of 120GB, had been assigned to the zone.

Diagnostics and fix

We had confirmed that sufficient memory had been allocated to the zone and that almost all of it was available for use. No custom projects had been created on the system. It then dawned on me that the database was probably trying to create a shared memory segment larger than the default limit of 8GB allowed for processes in the default project.

It then became clear that the fix was to set the value of project.max-shm-memory. To facilitate the installation, I set the parameter for the Oracle installer process.

uslabnodedb01# ps -ef|grep java
    root   684   104   0 09:13:46 pts/5       0:00 grep java
  obruce 29307 18012   0 08:52:16 pts/2       2:48 /brucedb/db/11.2.0/jdk/jre/bin/sparcv9/java -Doracle.installer.not_bootstrap=tr

I used the prctl command to set the value, as shown in the command below.

uslabnodedb01#  prctl -r -n project.max-shm-memory -v 123695058124 -i process 29307

Note that the size is in bytes.

Next, I verified that the value had been set.

uslabnodedb01#  prctl -n project.max-shm-memory  -i process 29307
process: 29307: /brucedb/db/11.2.0/jdk/jre/bin/sparcv9/java -Doracle.installer.not_boo
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
project.max-shm-memory
        privileged       115GB      -   deny                                 -
        system          16.0EB    max   deny                                 -

To make the changes permanent, I edited the /etc/project file entry for the default project and set the project.max-shm-memory value there. After making this change the DBAs confirmed that the installation went smoothly.
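
For reference, a sketch of what such an entry in /etc/project typically looks like, using the same byte value that was set above (verify the exact project name and fields on your system):

default:3::::project.max-shm-memory=(privileged,123695058124,deny)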


Conclusion

We hope that you found this quick fix useful, and we look forward to your suggestions and feedback.

Thursday 4 July 2019

SSH: Use password authentication despite availability of key-pair



Introduction

In this brief article we'll talk about a request I recently received from an application team in our organization. Here's the requirement:

"We have password less authentication configured between two users but we would like to login using a password as well when we need to."

I did try to explain that if key based authentication is rejected, SSH will fall back to password based authentication anyway unless it is set to no in the sshd_config file. The parameter I'm talking about is PasswordAuthentication, which is set to yes by default.
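
For reference, the relevant directive in /etc/ssh/sshd_config looks like the line below; it usually ships commented out, which leaves the default of yes in effect.

PasswordAuthentication yes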

To facilitate this requirement we need to use the PreferredAuthentications option with the ssh command and set its value to password.

I'll now demonstrate using this option in a practical scenario.

The setup:

I'm working on a CentOS 7.6 system and have created two users, test_user and test_user2. I've copied the public key for test_user over to the authorized_keys file for test_user2 to facilitate passwordless login.

[test_user@bolt-lab ~]$ ssh-copy-id test_user2@bolt-lab
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/test_user/.ssh/id_dsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
test_user2@bolt-lab's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'test_user2@bolt-lab'"
and check to make sure that only the key(s) you wanted were added.

The key has been copied over successfully. Now let's verify by logging in.

[test_user@bolt-lab ~]$ ssh test_user2@bolt-lab
[test_user2@bolt-lab ~]$

Now let's try to login with the PreferredAuthentications option set to password.

[test_user@bolt-lab ~]$ ssh -o PreferredAuthentications=password test_user2@bolt-lab
test_user2@bolt-lab's password:
[test_user2@bolt-lab ~]$

There you have it. This option works.

Q) Now we know that this option works for SSH but what about SFTP and SCP?
A) It does work with these as well and here is a demo to verify.

[test_user@bolt-lab ~]$ sftp test_user2@bolt-lab
Connected to bolt-lab.
sftp> ^D
[test_user@bolt-lab ~]$
[test_user@bolt-lab ~]$ sftp -o PreferredAuthentications=password test_user2@bolt-lab
test_user2@bolt-lab's password:
Connected to bolt-lab.
sftp> ^D
[test_user@bolt-lab ~]$
[test_user@bolt-lab ~]$ touch abc.txt
[test_user@bolt-lab ~]$ scp abc.txt test_user2@bolt-lab:~
abc.txt                                                                                                                  100%    0     0.0KB/s   00:00
[test_user@bolt-lab ~]$ scp -o PreferredAuthentications=password abc.txt test_user2@bolt-lab:~
test_user2@bolt-lab's password:
abc.txt                                                                                                                  100%    0     0.0KB/s   00:00
[test_user@bolt-lab ~]$
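
If you need this behaviour regularly for a particular destination, the same option can also be set per host in the client-side ~/.ssh/config file instead of typing it on every invocation. A minimal sketch using the host from this demo:

Host bolt-lab
    PreferredAuthentications password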


Conclusion

We hope you found this article useful and that it encourages you to explore more of the options and flags pertaining to the SSH protocol.

Monday 1 July 2019

Introducing Puppet Bolt


Introduction

If you've been working in the system administration field for a while, chances are that you've heard of or perhaps even used Puppet, which is one of the most popular configuration management tools out there. In this article we'll talk about Puppet Bolt, which is essentially an open source, agentless remote task runner.
Why would you use Puppet Bolt over Puppet?
In order to use Puppet effectively you'll need to learn its Domain Specific Language (DSL) to write your desired configurations in. A significant initial setup effort is also required, since you need to set up a Puppet master and install an agent on all managed nodes. In contrast, you need to install Bolt on just one node and you are good to go!
While Puppet involves writing desired state configurations from scratch, Bolt is meant to automate ad-hoc tasks imperatively and to run existing management scripts. The main goal of Puppet Bolt is to allow for faster automation in existing environments.
Here are a couple of examples of tasks you could use Puppet Bolt for:

  • Restart servers/services
  • Install an application like Docker
  • Install and configure MySQL

The setup:

For the purpose of this demonstration I'll be working on two virtual machines running the CentOS 7 operating system. I'll be installing Bolt on one of the systems and we will be remotely managing the other system as a client.


Installing Puppet Bolt:

To install Bolt, we first need to add the required repository. This is made available by installing an rpm from Puppet which contains the required repository information.

[root@bolt-lab ~]# sudo rpm -Uvh https://yum.puppet.com/puppet6/puppet6-release-el-7.noarch.rpm
Retrieving https://yum.puppet.com/puppet6/puppet6-release-el-7.noarch.rpm
warning: /var/tmp/rpm-tmp.Mg35Yq: Header V4 RSA/SHA256 Signature, key ID ef8d349f: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:puppet6-release-6.0.0-1.el7      ################################# [100%]
[root@bolt-lab ~]# yum install puppet-bolt
Loaded plugins: fastestmirror
Determining fastest mirrors
epel/x86_64/metalink                                                                                                                |  16 kB  00:00:00
 * base: mirror.aktkn.sg
 * epel: d2lzkl7pfhq30w.cloudfront.net
 * extras: mirror.aktkn.sg
 * nux-dextop: mirror.li.nux.ro
 * updates: mirror.aktkn.sg
base                                                                                                                                | 3.6 kB  00:00:00
epel                                                                                                                                | 5.3 kB  00:00:00
extras                                                                                                                              | 3.4 kB  00:00:00
nux-dextop                                                                                                                          | 2.9 kB  00:00:00
puppet6                                                                                                                             | 2.5 kB  00:00:00
tigervnc-el7                                                                                                                        | 2.9 kB  00:00:00
updates                                                                                                                             | 3.4 kB  00:00:00
xrdp                                                                                                                                | 2.9 kB  00:00:00
(1/11): epel/x86_64/group_gz                                                                                                        |  88 kB  00:00:00
(2/11): base/7/x86_64/primary_db                                                                                                    | 6.0 MB  00:00:01
(3/11): base/7/x86_64/group_gz                                                                                                      | 166 kB  00:00:01
(4/11): epel/x86_64/updateinfo                                                                                                      | 978 kB  00:00:01
(5/11): extras/7/x86_64/primary_db                                                                                                  | 205 kB  00:00:01
(6/11): puppet6/x86_64/primary_db                                                                                                   | 126 kB  00:00:00
(7/11): tigervnc-el7/primary_db                                                                                                     | 8.7 kB  00:00:00
(8/11): updates/7/x86_64/primary_db                                                                                                 | 6.4 MB  00:00:01
(9/11): nux-dextop/x86_64/primary_db                                                                                                | 1.8 MB  00:00:02
(10/11): epel/x86_64/primary_db                                                                                                     | 6.8 MB  00:00:03
(11/11): xrdp/primary_db                                                                                                            | 1.8 MB  00:00:04
Resolving Dependencies
--> Running transaction check
---> Package puppet-bolt.x86_64 0:1.25.0-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================================================================
 Package                                Arch                              Version                                 Repository                          Size
===========================================================================================================================================================
Installing:
 puppet-bolt                            x86_64                            1.25.0-1.el7                            puppet6                             30 M

Transaction Summary
===========================================================================================================================================================
Install  1 Package

Total download size: 30 M
Installed size: 102 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/puppet6/packages/puppet-bolt-1.25.0-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID ef8d349f: NOKEY --:--:-- ETA
Public key for puppet-bolt-1.25.0-1.el7.x86_64.rpm is not installed
puppet-bolt-1.25.0-1.el7.x86_64.rpm                                                                                                 |  30 MB  00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppet6-release
Importing GPG key 0xEF8D349F:
 Userid     : "Puppet, Inc. Release Key (Puppet, Inc. Release Key) <release@puppet.com>"
 Fingerprint: 6f6b 1550 9cf8 e59e 6e46 9f32 7f43 8280 ef8d 349f
 Package    : puppet6-release-6.0.0-1.el7.noarch (installed)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-puppet6-release
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : puppet-bolt-1.25.0-1.el7.x86_64                                                                                                         1/1
  Verifying  : puppet-bolt-1.25.0-1.el7.x86_64                                                                                                         1/1

Installed:
  puppet-bolt.x86_64 0:1.25.0-1.el7

Complete!
[root@bolt-lab ~]#


Using Bolt to run commands on Linux servers:

Puppet Bolt supports the SSH and WinRM remote management protocols, and SSH is used by default. If you wish to use WinRM for Windows nodes, you need to specify it in the --nodes string.
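
For example, a WinRM target would be addressed with a winrm:// prefix in the node string, along the lines of the sketch below (the host name and credentials are placeholders):

bolt command run 'ipconfig' --nodes winrm://winhost.example.com --user Administrator --password <password>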

Given below is the syntax for running a command on a remote system using Bolt:

bolt command run <COMMAND> --nodes <NODE> --user <USER> --password <PASSWORD>

In case you are connecting to a new host, you might want to skip the host key check performed by ssh. To do so, add the --no-host-key-check option to the bolt command.
To see the list of available options for 'bolt command run', type:

bolt command run --help


Example 1: Execute a command on a remote host

As our first example, let's run the uptime command on a remote host.

[sahil@bolt-lab ~]$ bolt command run 'uptime' --nodes 10.31.20.93 --user sahil
Started on 10.31.20.93...
Finished on 10.31.20.93:
  STDOUT:
     05:28:48 up  1:02,  2 users,  load average: 0.00, 0.01, 0.05
Successful on 1 node: 10.31.20.93
Ran on 1 node in 0.64 seconds
[sahil@bolt-lab ~]$

Note: By default, Bolt seems to execute the command as the user you initially logged in to the server with, and not the user you are currently working as, if the two are not the same. To work around this, I added the --user option and specified the user explicitly.


Example 2: Specify user password while connecting

I'm sure you are well aware of the shortcomings of typing passwords in plain text on the command line. But let's assume that in a dire situation you have to; Bolt allows you to do that using the --password flag. If you are typing the password on the command line, I'll assume that you haven't added the host's fingerprint to the known_hosts file on the source host, so we add the --no-host-key-check option as well.

[root@bolt-server ~]# bolt command run '/sbin/ip addr show ' --nodes 10.31.19.151 --no-host-key-check --user sahil --password B0lT_Te$t
Started on 10.31.19.151...
Finished on 10.31.19.151:
  STDOUT:
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 06:2a:36:c5:1c:fe brd ff:ff:ff:ff:ff:ff
        inet 10.31.19.151/20 brd 10.31.31.255 scope global noprefixroute dynamic eth0
           valid_lft 2386sec preferred_lft 2386sec
        inet6 2406:da18:77c:6102:568c:32c8:cdf5:b5b2/128 scope global noprefixroute dynamic
           valid_lft 439sec preferred_lft 139sec
        inet6 fe80::42a:36ff:fec5:1cfe/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
Successful on 1 node: 10.31.19.151
Ran on 1 node in 0.63 seconds

In case you do not want to type the password with the Bolt command, you could simply not type anything after the --password option and press enter. When you do this Bolt will ask you for the password for each of the destination nodes you are trying to execute commands on.


Example 3: Execute multiple commands on multiple hosts

If you need to execute more than one command, you can do so just like you would in a normal ssh session, i.e. enclose the commands in quotes and separate them with semicolons. To execute the command on multiple nodes, type the host names or IP addresses separated by commas after the --nodes option. Given below is an example.

[sahil@bolt-lab ~]$ bolt command run 'id -a;uptime' --nodes 10.31.20.93,10.31.19.151 --user sahil
Started on 10.31.20.93...
Started on 10.31.19.151...
Finished on 10.31.19.151:
  STDOUT:
    uid=1004(sahil) gid=1006(sahil) groups=1006(sahil) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
     05:37:29 up  1:04,  3 users,  load average: 0.00, 0.01, 0.05
Finished on 10.31.20.93:
  STDOUT:
    uid=1004(sahil) gid=1006(sahil) groups=1006(sahil) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
     05:37:29 up  1:11,  2 users,  load average: 0.00, 0.01, 0.05
Successful on 2 nodes: 10.31.20.93,10.31.19.151
Ran on 2 nodes in 0.70 seconds
[sahil@bolt-lab ~]$


That sounds great but what if the command I needed to run had quotes in it?
Well, Bolt handles that quite well just like a regular ssh session. Here's an example.

[sahil@bolt-lab ~]$ bolt command run "df -h | grep '^/'" --nodes 10.31.20.93,10.31.19.151 --user sahil
Started on 10.31.20.93...
Started on 10.31.19.151...
Finished on 10.31.19.151:
  STDOUT:
    /dev/xvda1       20G  6.3G   14G  32% /
Finished on 10.31.20.93:
  STDOUT:
    /dev/xvda1       20G  6.4G   14G  32% /
Successful on 2 nodes: 10.31.20.93,10.31.19.151
Ran on 2 nodes in 0.68 seconds
[sahil@bolt-lab ~]$


Example 4: Using short hand command options

If you, like everyone else, prefer to avoid typing anything you don't have to, you'll be happy to know that the command line options we've just discussed have shorthands, i.e. single-character alternatives like in many Linux commands. Here is an example using the shorthand options:

[sahil@bolt-lab ~]$ bolt command run "date" -n 10.31.19.151 -u sahil -p
Please enter your password:
Started on 10.31.19.151...
Finished on 10.31.19.151:
  STDOUT:
    Mon Jul  1 06:24:51 UTC 2019
Successful on 1 node: 10.31.19.151
Ran on 1 node in 0.65 seconds


Example 5: Running Bolt commands with sudo

A remote task runner would have very limited functionality if it didn't allow users to run commands with escalated privileges i.e. using the power of root. We can use the --run-as option with Bolt to specify that we would like to run a given command as the root user. To demonstrate let's restart the nfs service on our remote host.

[sahil@bolt-lab ~]$ bolt command run "systemctl restart nfs" -n 10.31.19.151 -u sahil --run-as root
Started on 10.31.19.151...
Finished on 10.31.19.151:
Successful on 1 node: 10.31.19.151
Ran on 1 node in 1.00 seconds

Needless to say, the user sahil needs to have sudo access defined in the sudoers file in order to be able to escalate privileges. In case you've not set the NOPASSWD attribute for the user in the sudoers file, you can use the --sudo-password option and either specify the password after the option itself or leave it blank to be prompted for the password during command execution.
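
For instance, if the NOPASSWD attribute isn't set for sahil, the restart from the example above could be run by adding --sudo-password with no value so that Bolt prompts for it:

bolt command run "systemctl restart nfs" -n 10.31.19.151 -u sahil --run-as root --sudo-password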


Example 6: Executing scripts on remote machines

We can use Bolt to execute scripts on remote machines. These scripts can be written in any language that the remote machine can understand and interpret. The way this works is that during the run, Bolt copies the script to the /tmp directory on the remote host, executes it and then deletes it.
To demonstrate we'll be executing the below Perl script on a host.

[sahil@bolt-lab ~]$ cat test.pl
#!/bin/perl -w
#
$my_system_name=`uname -n`;

print "System name is: $my_system_name\n";

print "Server uptime is:\n";
system("uptime");


To execute this script we will use the below command: 

[sahil@bolt-lab ~]$ bolt script run test.pl --nodes bolt-lab --user sahil
Started on bolt-lab...
Finished on bolt-lab:
  STDOUT:
    System name is: bolt-lab

    Server uptime is:
     06:46:54 up  2:14,  2 users,  load average: 0.04, 0.09, 0.10
Successful on 1 node: bolt-lab
Ran on 1 node in 1.46 seconds


Example 7: Uploading files with Bolt

Bolt allows users to upload a file to multiple remote nodes, specifying both the destination path and the file name. Here is an example of uploading the Perl script we executed in the previous example.

[sahil@bolt-lab ~]$ bolt file upload /home/sahil/test.pl /tmp/test_new.pl --nodes 10.31.20.93 --user sahil
Started on 172.31.20.93...
Finished on 172.31.20.93:
  Uploaded '/home/sahil/test.pl' to '172.31.20.93:/tmp/test_new.pl'
Successful on 1 node: 172.31.20.93
Ran on 1 node in 1.29 seconds

As of this writing Bolt only allows users to upload files and a download option is not available but is probably in the works.


Conclusion

This concludes our basic 'getting started' with Bolt. In the next few posts we'll be exploring some interesting options pertaining to nodes and also understanding how to run tasks with Puppet Bolt.

Saturday 29 June 2019

Running salt-ssh as a non-root user



Introduction

Security is of the essence in every enterprise infrastructure, but so is automation. One of the requirements for maintaining a healthy balance between the two is to not use root directly while working with automation tools. In this article I'll be setting up salt-ssh, the agentless version of Salt, and working with it as a non-root user.

This is by no means a comprehensive write-up on how Salt or salt-ssh works; it's more of a 'let's get started' guide.

First let's install the tool using yum.

[root@sahil-lab ~]# yum install salt salt-ssh -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.aktkn.sg
 * epel: d2lzkl7pfhq30w.cloudfront.net
 * extras: mirror.aktkn.sg
 * nux-dextop: li.nux.ro
 * updates: mirror.aktkn.sg
Resolving Dependencies
--> Running transaction check
---> Package salt.noarch 0:2015.5.10-2.el7 will be installed
--> Processing Dependency: m2crypto for package: salt-2015.5.10-2.el7.noarch
--> Processing Dependency: python-crypto for package: salt-2015.5.10-2.el7.noarch
--> Processing Dependency: python-msgpack for package: salt-2015.5.10-2.el7.noarch
--> Processing Dependency: python-zmq for package: salt-2015.5.10-2.el7.noarch
--> Processing Dependency: systemd-python for package: salt-2015.5.10-2.el7.noarch
---> Package salt-ssh.noarch 0:2015.5.10-2.el7 will be installed
 ------------------------------output truncated for brevity


Now let's create the required directory structure.

[sahil@sahil-lab ~]$ mkdir salt_setup
[sahil@sahil-lab ~]$ cd salt_setup/
[sahil@sahil-lab salt_setup]$ mkdir -p {config,salt/{files,templates,states,pillar,formulas,pki/master,logs}}
[sahil@sahil-lab salt_setup]$ mkdir cache
[sahil@sahil-lab salt_setup]$ touch ssh.log


We also need to copy the contents of the /etc/salt directory to the salt_setup directory under the user's home directory.

[root@sahil-lab ~]# cp -rp /etc/salt/* /home/sahil/salt_setup/
[root@sahil-lab ~]# chown sahil:sahil -R /home/sahil/salt_setup/*


The master config file:
The master config file has the same declarations that you would define when using Salt in master mode. Create a master config file with the following contents, which point Salt SSH to the locations of the previously created directories.

[sahil@sahil-lab salt_setup]$ cat master
root_dir: "/home/sahil/salt_setup"
pki_dir: "pki"
cachedir: "cache"
log_file: "salt-ssh.log"
[sahil@sahil-lab salt_setup]$


The Saltfile:
The Saltfile allows you to set command line configuration options in a file instead of declaring them at runtime. Create a Saltfile with the following contents.

[sahil@sahil-lab salt_setup]$ cat Saltfile
salt-ssh:
  config_dir: "/home/sahil/salt_setup/"
  log_file: "/home/sahil/salt_setup/ssh.log"
  pki_dir: "/home/sahil/salt_setup/pki"
  cachedir: "/home/sahil/salt_setup/cache"
  roster_file: "/home/sahil/salt_setup/roster"
  ssh_wipe: True
[sahil@sahil-lab salt_setup]$


The roster file:
The roster file is used to define remote minions and their connection parameters. The default roster file has some commented out examples that you could use. I've set up a fairly simple one as shown below:

[sahil@sahilsuri0082c salt_setup]$ cat roster
# Sample salt-ssh config file
#web1:
#  host: 192.168.42.1 # The IP addr or DNS hostname
#  user: fred         # Remote executions will be executed as user fred
#  passwd: foobarbaz  # The password to use for login, if omitted, keys are used
#  sudo: True         # Whether to sudo to root, not enabled by default
#web2:
#  host: 192.168.42.2

my-salt-vm: 172.40.36.36

In the above example, my-salt-vm is the Salt ID of the host I wish to connect to, followed by its IP address. I could've also used the server's hostname instead of the IP address.


Testing the setup


Let's use the cmd.run module to get the uptime of our host.

[sahil@sahil-lab salt_setup]$ salt-ssh  '*'  cmd.run 'uptime' --user sahil --priv /home/sahil/.ssh/id_dsa
my-salt-vm:
     05:19:32 up  2:17,  1 user,  load average: 0.15, 0.07, 0.10
[sahil@sahil-lab salt_setup]$


You might be wondering why I specified the user name and key file path explicitly. If you don't, salt-ssh defaults to the root user and the following happens:

sahil@sahil-lab salt_setup]$ salt-ssh  '*'  cmd.run 'uptime'
Permission denied for host my-salt-vm, do you want to deploy the salt-ssh key? (password required):
[Y/n] y
Password for root@my-salt-vm:
my-salt-vm:
    Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
[sahil@sahil-lab salt_setup]$

The '*' means run the command on all hosts defined in the roster.
If you do not want to specify the user name and key file path every time you connect, you can also specify them in the roster file.
Here is an example:

cat roster | grep -v '#'

lab-node1:
  host: 172.40.36.36
  user: sahil
  priv: /home/sahil/.ssh/id_dsa
  sudo: True

With this in place you could invoke salt-ssh as shown below:

[sahil@sahil-lab salt_setup]$ salt-ssh lab-node1 cmd.run 'uptime'
lab-node1:
     06:51:28 up  3:49,  2 users,  load average: 0.00, 0.01, 0.05
[sahil@sahil-lab salt_setup]$


Salt-ssh requires Python 2.7 or 3.x to be available on the target machines. But what if you are connecting to a system that has Python 2.6, or one that doesn't have Python installed at all?
In that case you can use the -r option to execute a raw shell command.

[sahil@sahil-lab salt_setup]$ salt-ssh  '*'  -r 'uptime' --user sahil --priv /home/sahil/.ssh/id_dsa
my-salt-vm:
    ----------
    retcode:
        0
    stderr:
    stdout:
         06:47:10 up  3:45,  2 users,  load average: 0.00, 0.01, 0.05
[sahil@sahil-lab salt_setup]$


Note: When invoking salt-ssh commands as a non-root user, you must be in the directory where the Salt configuration files (the roster, Saltfile and master configuration file) are located.


Last words..

Salt-ssh is a nice agentless extension to the Salt tool, but having worked with Ansible I find Ansible's inventory file system, coupled with its overall ease of setup, to be much more flexible. As a result, given the option to work with salt-ssh or Ansible, I would choose Ansible. If you've worked with both tools, I'd love to hear about your experience.

Merge two consecutive lines using awk

Introduction:
As system admins we spend a lot of our time working with files. While doing so we may come across situations where we need to manipulate the content of a file or the output of a command to suit our needs. I came across such a situation recently where I had to run nslookup on a couple of hosts and get the hostname and the IP address printed on the same line, with a colon and a space acting as the delimiter. As with many things in UNIX/Linux there is more than one tool for the job. My task could've been accomplished using sed or perl, but I chose to go with awk.

The command:

[ssuri@ulabtestpinfra09:~] $ for i in `cat<<EOF
> ulabtestdinfap31
> ulabtestdinfap35
> ulabtestdinfap37
> EOF`
> do nslookup   $i | awk '/Name|Address: 10/  {print $2}' | awk '!(NR%2){print p ": " $0 }{p=$0}'
> done
ulabtestdinfap31.example.org: 10.47.84.34
ulabtestdinfap35.example.org: 10.47.64.58
ulabtestdinfap37.example.org: 10.47.216.14
[ssuri@ulabtestpinfra09:~] $


Explanation:
As you might've noticed I've used awk twice. The first use is basic so I won't get into it. Now let's talk about the second awk. NR is the current record (line) number. % is the modulus operator (i.e. a%b is the remainder when a is divided by b). (NR%2) is therefore 1 (true) when NR is odd and 0 (false) when NR is even, so !(NR%2) is true only on even-numbered lines. !(NR%2){print p ": " $0} means the program prints the previous line (stored in the variable p), followed by ": " and the current line, but only on even-numbered lines. {p=$0} means that on every line, p is set to the current line (after any printing has happened), so the next even-numbered line can pair with it.
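
If you'd like to try the merging part on its own, here is a standalone illustration with made-up input:

printf 'host1\n10.0.0.1\nhost2\n10.0.0.2\n' | awk '!(NR%2){print p ": " $0}{p=$0}'
host1: 10.0.0.1
host2: 10.0.0.2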


Conclusion:
This concludes our quick article on how we could use awk to merge or join two consecutive lines. I hope that you found this post to be useful.

Monday 17 June 2019

Lists in Python

Introduction

Lists in Python are analogous to arrays in Perl. A list holds a collection of items, which could be strings or numbers. A list can in fact contain another list.
Declaring a list is fairly straightforward: type the list name followed by the assignment operator (=) and then the items in square brackets, separated by commas.

>>> list=[1,2,3,4,'sahil']
>>> print list
[1, 2, 3, 4, 'sahil']
>>>

To access an individual element in the list, type list_name[index]. Note that indices start from 0, not 1.

>>> print list[4]
sahil
>>>
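
Since a list can contain another list (as mentioned earlier), indexing can be chained to reach the inner elements; a quick illustration:

>>> nested=[1,2,[3,'sahil']]
>>> print nested[2][1]
sahil
>>>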

Modifying lists:

There are a number of operations we can perform on lists to manipulate them. Here are a couple of examples.

Adding an element to a list:

>>> print list
[1, 2, 3, 4, 'sahil']
>>> list +=["hello"]
>>> print list
[1, 2, 3, 4, 'sahil', 'hello']
>>>


Substituting an element in the list:

>>> list=[1,2,3,4,'sahil']
>>> list[2]=9
>>> print list
[1, 2, 9, 4, 'sahil']


Replacing multiple items in a list:

>>> list[1:3]=[7,8]
>>> print list
[1, 7, 8, 4, 'sahil']
>>>
>>> list=[1, 7, 8, 4, 'sahil']
>>> list[1:2]=[2,3]
>>> print list
[1, 2, 3, 8, 4, 'sahil']
>>>


Removing multiple items from a list (starting again from the original list [1, 2, 3, 4, 'sahil']):

>>> list[1:3]=[]
>>> print list
[1, 4, 'sahil']
>>>


Add an element using the append function:

>>> list.append('world')
>>> print list
[1, 2, 3, 8, 4, 'sahil', 'world']
>>>


Remove a list element using the pop function (pop returns the element that was removed):

>>> list.pop(2)
3
>>> print list
[1, 2, 8, 4, 'sahil', 'world']
>>>


Remove a list element using its value:

>>> list.remove('sahil')
>>> print list
[1, 2, 8, 4, 'world']
>>>


Conclusion

This concludes our discussion on lists in Python. We hope that you found this quick and simple explanation to be useful.
