Tuesday, 30 July 2019

While loops over ssh: solving problems

Introduction

A simple task that most of us have performed countless times is to put a list of servers in a file, pass that list to a for loop, ssh to each server in the loop and run some commands on it. When you try to do the same thing with a while loop, you expect it to work, but it doesn't unless you are an ssh sensei and know some of its tricks.


The problem

When you feed input to a while loop from a file and run ssh commands against the listed servers inside the loop, the loop either hangs or works only on the first server, i.e. it runs one iteration and then stops. The problem is that ssh reads from standard input and therefore consumes all the remaining lines of the file. The fix is to connect ssh's standard input to /dev/null.

ssh $USER@$SERVER "cd ${REMOTE_PATH}; mkdir -p $i" < /dev/null
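The effect is easy to reproduce locally with no servers at all. In this sketch, cat > /dev/null plays the role of ssh, since both read from standard input by default:

```shell
# Reproduce the stdin-eating problem locally: "cat > /dev/null" stands in
# for ssh, because both read from standard input by default.
servers='server1
server2
server3'

# Broken: the inner command swallows the remaining lines of the list
count=0
while read -r host; do
  count=$((count + 1))
  cat > /dev/null              # stand-in for: ssh "$host" some-command
done <<EOF
$servers
EOF
echo "without redirect: $count iteration(s)"

# Fixed: redirect the inner command's stdin from /dev/null (or use ssh -n)
count=0
while read -r host; do
  count=$((count + 1))
  cat < /dev/null > /dev/null  # stand-in for: ssh -n "$host" some-command
done <<EOF
$servers
EOF
echo "with redirect: $count iteration(s)"
```

The first loop reports a single iteration because cat ate server2 and server3; the second reports all three.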

In some cases redirecting stdin from /dev/null this way is known not to work. A simple solution for that scenario is to use ssh's -n option, which redirects stdin from /dev/null for you, as in the example below.


cat dev_list | tr ',' ' ' | while read INS SERVER ETC; do
    echo "$SERVER -- $INS"
    sudo ssh -o StrictHostKeyChecking=no -o "BatchMode yes" -n -q "$SERVER" \
        "ls /tmp | grep dbstart | grep log" \
        && echo "$SERVER -- $INS" >> log_found.txt
done


Here is the description of the -n option straight from the man page for ssh:

     -n      Redirects stdin from /dev/null (actually, prevents reading from stdin).  This must be used when ssh is run in the background.  A common trick is to use this to run X11 programs on a remote machine. For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel.  The ssh program will be put in the background.  (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)


Conclusion

We hope you found this article useful and that this ssh trick helps the next time you run scripts remotely via ssh inside a while loop.

Thursday, 11 July 2019

Fixing ORA-27102: out of memory on Solaris 11 during DB installation

Introduction

While installing Oracle database version 12c on a Solaris 11 zone, our DB team reported the following error:

ORA-27102: out of memory

They asked us to investigate. On checking, I found that memory utilization was very low and that sufficient memory, to the tune of 120 GB, had been assigned to the zone.

Diagnostics and fix

We had confirmed that sufficient memory had been allocated to the zone and that almost all of it was available for use. No custom projects had been created on the system, so processes were running under the default project. It then dawned on me that the database was probably trying to create a shared memory segment larger than the default cap of 8 GB allocated to processes in the default project.

It then became clear that the fix was to set the value of project.max-shm-memory. To facilitate the installation, I set the parameter for the Oracle installer process.

uslabnodedb01# ps -ef|grep java
    root   684   104   0 09:13:46 pts/5       0:00 grep java
  obruce 29307 18012   0 08:52:16 pts/2       2:48 /brucedb/db/11.2.0/jdk/jre/bin/sparcv9/java -Doracle.installer.not_bootstrap=tr

I used the prctl command to set the value, as shown below.

uslabnodedb01#  prctl -r -n project.max-shm-memory -v 123695058124 -i process 29307

Note that the size is in bytes.
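As a quick sanity check, a cap of 115 GiB works out to roughly the value passed to prctl above:

```shell
# project.max-shm-memory is specified in bytes; compute 115 GiB
echo $((115 * 1024 * 1024 * 1024))
# 123480309760  (slightly below the 123695058124 used above)
```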

Next, I verified that the value had been set.

uslabnodedb01#  prctl -n project.max-shm-memory  -i process 29307
process: 29307: /brucedb/db/11.2.0/jdk/jre/bin/sparcv9/java -Doracle.installer.not_boo
NAME    PRIVILEGE       VALUE    FLAG   ACTION                       RECIPIENT
project.max-shm-memory
        privileged       115GB      -   deny                                 -
        system          16.0EB    max   deny                                 -

To make the changes permanent, I edited the /etc/project file entry for the default project and set the project.max-shm-memory value there. After making this change the DBAs confirmed that the installation went smoothly.
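For reference, the resulting /etc/project entry for the default project looks something like this (the resource control value is the one used above; the other fields are the stock defaults, and Solaris also lets you manage this file with the projmod command instead of editing it by hand):

```
default:3::::project.max-shm-memory=(privileged,123695058124,deny)
```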


Conclusion

We hope you found this quick fix useful, and we look forward to your suggestions and feedback.

Thursday, 4 July 2019

SSH: Use password authentication despite availability of key-pair



Introduction

In this brief article we'll talk about a request I recently received from an application team in our organization. Here's the requirement:

"We have password less authentication configured between two users but we would like to login using a password as well when we need to."

I did try to explain that if key-based authentication fails, SSH falls back to password-based authentication anyway, unless it's disabled in the sshd_config file. The parameter I'm talking about is PasswordAuthentication, and it is set to yes by default.

To meet this requirement we need to use the PreferredAuthentications option with the ssh command and set its value to password.
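Before trying it against a live host, you can confirm which authentication methods the client will attempt by asking ssh to dump its resolved configuration with the -G flag (available in OpenSSH 6.8 and later); the hostname below is a placeholder and is never contacted:

```shell
# Print the client configuration ssh would use, without connecting
ssh -G -o PreferredAuthentications=password example.invalid | grep -i preferredauthentications
```

The output should show a preferredauthentications line containing only password, confirming that publickey will not be attempted.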

I'll now demonstrate using this option in a practical scenario.

The setup:

I'm working on a CentOS 7.6 system and have created two users, test_user and test_user2. I've copied the public key for test_user over to the authorized_keys file for test_user2 to facilitate passwordless login.

[test_user@bolt-lab ~]$ ssh-copy-id test_user2@bolt-lab
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/test_user/.ssh/id_dsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
test_user2@bolt-lab's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'test_user2@bolt-lab'"
and check to make sure that only the key(s) you wanted were added.

The key has been copied over successfully. Now let's verify by logging in.

[test_user@bolt-lab ~]$ ssh test_user2@bolt-lab
[test_user2@bolt-lab ~]$

Now let's try to login with the PreferredAuthentications option set to password.

[test_user@bolt-lab ~]$ ssh -o PreferredAuthentications=password test_user2@bolt-lab
test_user2@bolt-lab's password:
[test_user2@bolt-lab ~]$

There you have it. This option works.

Q) Now we know that this option works for SSH but what about SFTP and SCP?
A) It does work with these as well and here is a demo to verify.

[test_user@bolt-lab ~]$ sftp test_user2@bolt-lab
Connected to bolt-lab.
sftp> ^D
[test_user@bolt-lab ~]$
[test_user@bolt-lab ~]$ sftp -o PreferredAuthentications=password test_user2@bolt-lab
test_user2@bolt-lab's password:
Connected to bolt-lab.
sftp> ^D
[test_user@bolt-lab ~]$
[test_user@bolt-lab ~]$ touch abc.txt
[test_user@bolt-lab ~]$ scp abc.txt test_user2@bolt-lab:~
abc.txt                                                                                                                  100%    0     0.0KB/s   00:00
[test_user@bolt-lab ~]$ scp -o PreferredAuthentications=password abc.txt test_user2@bolt-lab:~
test_user2@bolt-lab's password:
abc.txt                                                                                                                  100%    0     0.0KB/s   00:00
[test_user@bolt-lab ~]$


Conclusion

We hope you found this article useful and that it encourages you to explore more options and flags pertaining to the SSH protocol.

Monday, 1 July 2019

Introducing Puppet Bolt


Introduction

If you've been working in the system administration field for a while, chances are you've heard of, or perhaps even used, Puppet, one of the most popular configuration management tools out there. In this article we'll talk about Puppet Bolt, which is an open source, agentless remote task runner.
Why would you use Puppet Bolt over Puppet?

To use Puppet effectively, you need to learn its domain-specific language (DSL) to write your desired configurations in. Significant initial setup effort is also required, since you need to set up a Puppet master and install an agent on every managed node. In contrast, you install Bolt on just one node and you are good to go!

While Puppet involves writing desired-state configurations from scratch, Bolt is meant to automate ad-hoc tasks imperatively and to run existing management scripts. The main goal of Puppet Bolt is to allow for faster automation.
Here are a couple of examples of tasks you could use Puppet Bolt for:

  • Restart servers/services
  • Install an application like Docker
  • Install and configure MySQL

The setup:

For the purpose of this demonstration I'll be working on two virtual machines running the CentOS 7 operating system. I'll install Bolt on one of them, and we will manage the other remotely as a client.


Installing Puppet Bolt:

To install Bolt, we first need to add the required repository. This is made available by installing an rpm from Puppet which contains the required repository information.

[root@bolt-lab ~]# sudo rpm -Uvh https://yum.puppet.com/puppet6/puppet6-release-el-7.noarch.rpm
Retrieving https://yum.puppet.com/puppet6/puppet6-release-el-7.noarch.rpm
warning: /var/tmp/rpm-tmp.Mg35Yq: Header V4 RSA/SHA256 Signature, key ID ef8d349f: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:puppet6-release-6.0.0-1.el7      ################################# [100%]
[root@bolt-lab ~]# yum install puppet-bolt
Loaded plugins: fastestmirror
Determining fastest mirrors
epel/x86_64/metalink                                                                                                                |  16 kB  00:00:00
 * base: mirror.aktkn.sg
 * epel: d2lzkl7pfhq30w.cloudfront.net
 * extras: mirror.aktkn.sg
 * nux-dextop: mirror.li.nux.ro
 * updates: mirror.aktkn.sg
base                                                                                                                                | 3.6 kB  00:00:00
epel                                                                                                                                | 5.3 kB  00:00:00
extras                                                                                                                              | 3.4 kB  00:00:00
nux-dextop                                                                                                                          | 2.9 kB  00:00:00
puppet6                                                                                                                             | 2.5 kB  00:00:00
tigervnc-el7                                                                                                                        | 2.9 kB  00:00:00
updates                                                                                                                             | 3.4 kB  00:00:00
xrdp                                                                                                                                | 2.9 kB  00:00:00
(1/11): epel/x86_64/group_gz                                                                                                        |  88 kB  00:00:00
(2/11): base/7/x86_64/primary_db                                                                                                    | 6.0 MB  00:00:01
(3/11): base/7/x86_64/group_gz                                                                                                      | 166 kB  00:00:01
(4/11): epel/x86_64/updateinfo                                                                                                      | 978 kB  00:00:01
(5/11): extras/7/x86_64/primary_db                                                                                                  | 205 kB  00:00:01
(6/11): puppet6/x86_64/primary_db                                                                                                   | 126 kB  00:00:00
(7/11): tigervnc-el7/primary_db                                                                                                     | 8.7 kB  00:00:00
(8/11): updates/7/x86_64/primary_db                                                                                                 | 6.4 MB  00:00:01
(9/11): nux-dextop/x86_64/primary_db                                                                                                | 1.8 MB  00:00:02
(10/11): epel/x86_64/primary_db                                                                                                     | 6.8 MB  00:00:03
(11/11): xrdp/primary_db                                                                                                            | 1.8 MB  00:00:04
Resolving Dependencies
--> Running transaction check
---> Package puppet-bolt.x86_64 0:1.25.0-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===========================================================================================================================================================
 Package                                Arch                              Version                                 Repository                          Size
===========================================================================================================================================================
Installing:
 puppet-bolt                            x86_64                            1.25.0-1.el7                            puppet6                             30 M

Transaction Summary
===========================================================================================================================================================
Install  1 Package

Total download size: 30 M
Installed size: 102 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7/puppet6/packages/puppet-bolt-1.25.0-1.el7.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID ef8d349f: NOKEY --:--:-- ETA
Public key for puppet-bolt-1.25.0-1.el7.x86_64.rpm is not installed
puppet-bolt-1.25.0-1.el7.x86_64.rpm                                                                                                 |  30 MB  00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-puppet6-release
Importing GPG key 0xEF8D349F:
 Userid     : "Puppet, Inc. Release Key (Puppet, Inc. Release Key) <release@puppet.com>"
 Fingerprint: 6f6b 1550 9cf8 e59e 6e46 9f32 7f43 8280 ef8d 349f
 Package    : puppet6-release-6.0.0-1.el7.noarch (installed)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-puppet6-release
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : puppet-bolt-1.25.0-1.el7.x86_64                                                                                                         1/1
  Verifying  : puppet-bolt-1.25.0-1.el7.x86_64                                                                                                         1/1

Installed:
  puppet-bolt.x86_64 0:1.25.0-1.el7

Complete!
[root@bolt-lab ~]#


Using Bolt to run commands on Linux servers:

Puppet Bolt supports the SSH and WinRM remote management protocols, with SSH used by default. If you wish to use WinRM for Windows nodes, prefix the node with winrm:// in the --nodes string.

Given below is the syntax for running a command on a remote system using Bolt:

bolt command run <COMMAND> --nodes <NODE> --user <USER> --password <PASSWORD>

In case you are connecting to a new host, you might want to skip the host key check performed by ssh. To do so, add the --no-host-key-check option to the bolt command.
To see the list of available options for 'bolt command run', type:

bolt command run --help


Example 1: Execute a command on a remote host

As our first example, let's run the uptime command on a remote host.

[sahil@bolt-lab ~]$ bolt command run 'uptime' --nodes 10.31.20.93 --user sahil
Started on 10.31.20.93...
Finished on 10.31.20.93:
  STDOUT:
     05:28:48 up  1:02,  2 users,  load average: 0.00, 0.01, 0.05
Successful on 1 node: 10.31.20.93
Ran on 1 node in 0.64 seconds
[sahil@bolt-lab ~]$

Note: By default, Bolt seems to execute the command as the user you initially logged in to the server with, not the user you are currently logged in as, if the two differ. To work around this, I added the --user option and specified the user.


Example 2: Specify user password while connecting

I'm sure you are well aware of the shortcomings of typing passwords in plain text on the command line. But let's assume that in a dire situation you have to type it in; Bolt allows you to do that using the --password flag. If you are typing the password on the command line, I assume you've not added the host fingerprint to your known_hosts file on the source host, so we add the --no-host-key-check option as well. Note that the password is single-quoted below so the shell doesn't expand the $ character in it.

[root@bolt-server ~]# bolt command run '/sbin/ip addr show ' --nodes 10.31.19.151 --no-host-key-check --user sahil --password 'B0lT_Te$t'
Started on 10.31.19.151...
Finished on 10.31.19.151:
  STDOUT:
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 06:2a:36:c5:1c:fe brd ff:ff:ff:ff:ff:ff
        inet 10.31.19.151/20 brd 10.31.31.255 scope global noprefixroute dynamic eth0
           valid_lft 2386sec preferred_lft 2386sec
        inet6 2406:da18:77c:6102:568c:32c8:cdf5:b5b2/128 scope global noprefixroute dynamic
           valid_lft 439sec preferred_lft 139sec
        inet6 fe80::42a:36ff:fec5:1cfe/64 scope link noprefixroute
           valid_lft forever preferred_lft forever
Successful on 1 node: 10.31.19.151
Ran on 1 node in 0.63 seconds

In case you do not want to type the password with the Bolt command, simply type nothing after the --password option and press enter. Bolt will then ask you for the password for each of the destination nodes you are trying to execute commands on.


Example 3: Execute multiple commands on multiple hosts

If you need to execute more than one command, you can do so like you would in a normal ssh session, i.e. enclose the commands in quotes and separate them with semicolons. To execute the command on multiple nodes, give the host names or IP addresses separated by commas after the --nodes option. Given below is an example.

[sahil@bolt-lab ~]$ bolt command run 'id -a;uptime' --nodes 10.31.20.93,10.31.19.151 --user sahil
Started on 10.31.20.93...
Started on 10.31.19.151...
Finished on 10.31.19.151:
  STDOUT:
    uid=1004(sahil) gid=1006(sahil) groups=1006(sahil) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
     05:37:29 up  1:04,  3 users,  load average: 0.00, 0.01, 0.05
Finished on 10.31.20.93:
  STDOUT:
    uid=1004(sahil) gid=1006(sahil) groups=1006(sahil) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
     05:37:29 up  1:11,  2 users,  load average: 0.00, 0.01, 0.05
Successful on 2 nodes: 10.31.20.93,10.31.19.151
Ran on 2 nodes in 0.70 seconds
[sahil@bolt-lab ~]$
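When the same set of nodes comes up again and again, listing IPs on the command line gets tedious. Bolt also supports an inventory file; a minimal sketch for our two lab nodes might look like the following (the group name is made up, and the exact schema may differ between Bolt versions):

```yaml
# inventory.yaml -- place in the directory you run bolt from (a Boltdir)
groups:
  - name: lab
    nodes:
      - 10.31.20.93
      - 10.31.19.151
    config:
      ssh:
        user: sahil
        host-key-check: false
```

With this in place, --nodes lab targets both machines without repeating the IPs or the --user option.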


That sounds great, but what if the command I needed to run had quotes in it?
Well, Bolt handles that quite well, just like a regular ssh session. Here's an example.

[sahil@bolt-lab ~]$ bolt command run "df -h | grep '^/'" --nodes 10.31.20.93,10.31.19.151 --user sahil
Started on 10.31.20.93...
Started on 10.31.19.151...
Finished on 10.31.19.151:
  STDOUT:
    /dev/xvda1       20G  6.3G   14G  32% /
Finished on 10.31.20.93:
  STDOUT:
    /dev/xvda1       20G  6.4G   14G  32% /
Successful on 2 nodes: 10.31.20.93,10.31.19.151
Ran on 2 nodes in 0.68 seconds
[sahil@bolt-lab ~]$


Example 4: Using shorthand command options

If you, like everyone else, prefer to avoid typing anything you don't have to, you'll be happy to know that the command-line options we've just discussed have shorthand, i.e. single-character, alternatives, as in many Linux commands. Here is an example using the shorthand options:

[sahil@bolt-lab ~]$ bolt command run "date" -n 10.31.19.151 -u sahil -p
Please enter your password:
Started on 10.31.19.151...
Finished on 10.31.19.151:
  STDOUT:
    Mon Jul  1 06:24:51 UTC 2019
Successful on 1 node: 10.31.19.151
Ran on 1 node in 0.65 seconds


Example 5: Running Bolt commands with sudo

A remote task runner would have very limited functionality if it didn't allow users to run commands with escalated privileges, i.e. as root. We can use the --run-as option to tell Bolt to run a given command as the root user. To demonstrate, let's restart the nfs service on our remote host.

[sahil@bolt-lab ~]$ bolt command run "systemctl restart nfs" -n 10.31.19.151 -u sahil --run-as root
Started on 10.31.19.151...
Finished on 10.31.19.151:
Successful on 1 node: 10.31.19.151
Ran on 1 node in 1.00 seconds

Needless to say, the user sahil needs to have sudo access defined in the sudoers file in order to escalate privileges. If you've not set the NOPASSWD attribute for the user in the sudoers file, you can use the --sudo-password option and specify the password after the option itself, or leave it blank to be prompted for a password during command execution.


Example 6: Executing scripts on remote machines

We can use Bolt to execute scripts on remote machines. These scripts can be written in any language the remote machine can understand and interpret. During a run, Bolt copies the script to the /tmp directory on the remote host, executes it and then deletes it.
To demonstrate, we'll execute the Perl script below on a host.

[sahil@bolt-lab ~]$ cat test.pl
#!/bin/perl -w
#
$my_system_name=`uname -n`;

print "System name is: $my_system_name\n";

print "Server uptime is:\n";
system("uptime");


To execute this script we will use the below command: 

[sahil@bolt-lab ~]$ bolt script run test.pl --nodes bolt-lab --user sahil
Started on bolt-lab...
Finished on bolt-lab:
  STDOUT:
    System name is: bolt-lab

    Server uptime is:
     06:46:54 up  2:14,  2 users,  load average: 0.04, 0.09, 0.10
Successful on 1 node: bolt-lab
Ran on 1 node in 1.46 seconds


Example 7: Uploading files with Bolt

Bolt allows users to upload a file to multiple remote nodes, at a given destination path and file name. Here is an example, uploading the Perl script we executed in a previous example.

[sahil@bolt-lab ~]$ bolt file upload /home/sahil/test.pl /tmp/test_new.pl --nodes 10.31.20.93 --user sahil
Started on 10.31.20.93...
Finished on 10.31.20.93:
  Uploaded '/home/sahil/test.pl' to '10.31.20.93:/tmp/test_new.pl'
Successful on 1 node: 10.31.20.93
Ran on 1 node in 1.29 seconds

As of this writing, Bolt only allows users to upload files; a download option is not available but is probably in the works.


Conclusion

This concludes our basic 'getting started' with Bolt. In the next few posts we'll be exploring some interesting options pertaining to nodes and also understanding how to run tasks with Puppet Bolt.
