Monday, 25 April 2016

Installing Java in Solaris on SPARC 64-bit architecture

This is a short post describing the installation of JDK/JRE 1.6 on Solaris 10 on the SPARC 64-bit platform.

Please note that Java 1.6 is an older version & you'd need a support contract to download it.

Once you have downloaded the zip archive run the following command to extract it:

bash-3.2# unzip p9553040_160_SOLARIS64.zip
Archive:  p9553040_160_SOLARIS64.zip
  inflating: jdk-6u115-solaris-sparcv9.sh
 extracting: jdk-6u115-solaris-sparcv9.tar.Z
  inflating: jre-6u115-solaris-sparcv9.sh
  inflating: readme.txt

bash-3.2# ls
jdk-6u115-solaris-sparcv9.tar.Z
bash-3.2# zcat jdk-6u115-solaris-sparcv9.tar.Z  | tar -xf -

Extracting the archive gives us two packages: one for the JRE & the other for the JDK.

bash-3.2# ls
jdk-6u115-solaris-sparcv9.tar.Z  SUNWj6dvx                        SUNWj6rtx

bash-3.2# pkgadd -d . SUNWj6dvx SUNWj6rtx

Processing package instance <SUNWj6dvx> from </java/123>

JDK 6.0 64-bit Dev. Tools (1.6.0_115)(sparc) 1.6.0,REV=2006.11.29.04.58
Copyright (c) 1995, 2016, Oracle and/or its affiliates. All rights reserved.
Using </usr> as the package base directory.
## Processing package information.
## Processing system information.
## Verifying package dependencies.
WARNING:
    The <SUNWj6dev> package "JDK 6.0 Dev. Tools (1.6.0)" is
    a prerequisite package and should be installed.
WARNING:
    The <SUNWj6rtx> package "JDK 6.0 64-bit Runtime Env.
    (1.6.0)" is a prerequisite package and should be
    installed.

Do you want to continue with the installation of <SUNWj6dvx> [y,n,?] y
## Verifying disk space requirements.

Processing package instance <SUNWj6rtx> from </java/123>

JDK 6.0 64-bit Runtime Env. (1.6.0_115)(sparc) 1.6.0,REV=2006.11.29.04.58
Copyright (c) 1995, 2016, Oracle and/or its affiliates. All rights reserved.
Using </usr> as the package base directory.
## Processing package information.
## Processing system information.
   6 package pathnames are already properly installed.
## Verifying package dependencies.
WARNING:
    The <SUNWj6rt> package "JDK 6.0 Runtime Env. (1.6.0)"
    is a prerequisite package and should be installed.

Do you want to continue with the installation of <SUNWj6rtx> [y,n,?] y
## Verifying disk space requirements.
-------------------------------------------------------output truncated

To verify the install you can check the installed version of java with the following command:

bash-3.2# pwd
/usr/jdk/instances/jdk1.6.0/bin/sparcv9
bash-3.2# ./java -fullversion
java full version "1.6.0_115-b12"

Mount an iso without using lofiadm

We usually mount an iso in Solaris using lofiadm.

Apparently we can skip the lofiadm steps & mount the iso directly as an hsfs file system as follows:

bash-3.2# pwd
/iso
bash-3.2# ls
sol-10-u9-ga-sparc-dvd.iso
bash-3.2#

To mount the iso run the following command:

mount -F hsfs /iso/sol-10-u9-ga-sparc-dvd.iso /mnt

The 'df -h' output will look as follows:

/iso/sol-10-u9-ga-sparc-dvd.iso
                       2.1G   2.1G         0K   100%    /mnt

Fix for NTP error "No association ID's returned"

So today I came across an error wherein I was unable to get NTP running.

I started the service via svcadm like this:

# svcadm restart network/ntp

When I ran 'ntpq -p' to check the status I got the below error:

# ntpq -p
No association ID's returned

The configuration file looked good & had entries for the relevant NTP servers.

# cat /etc/inet/ntp.conf
server ntp1.vodafone.com.au
server ntp2.vodafone.com.au
server ntp3.vodafone.com.au
server ntp4.vodafone.com.au
server ntp5.vodafone.com.au

I then did a manual sync with one of the NTP servers listed in the file & that did the trick.

# ntpdate -u ntp1.vodafone.com.au
25 Apr 14:55:05 ntpdate[49624]: adjust time server 10.105.138.4 offset 0.007227 sec

After the manual sync all five NTP servers were detected & were showing up in the output of ntpq -p:

#  ntpq -p
     remote           refid      st t when poll reach   delay   offset    disp
====================================================================
-ntp1.vodafone.c .GPS.            1 u   56   64   77     0.34    2.185  376.19
-ntp2.vodafone.c .GPS.            1 u   56   64   77    13.17    2.123  376.25
*ntp3.vodafone.c .GPS.            1 u   56   64   77    17.26    2.234  376.25
+ntp4.vodafone.c .GPS.            1 u   56   64   77    19.07    2.289  376.19
+ntp5.vodafone.c .GPS.            1 u   56   64   77    46.33    2.268  376.21
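As a side note (an assumption on my part, not something I tested in this incident): if your NTP daemon is NTPv4, adding the iburst option to each server line in /etc/inet/ntp.conf makes the daemon send an initial burst of packets, which usually gets associations established much faster after a restart:

```
server ntp1.vodafone.com.au iburst
server ntp2.vodafone.com.au iburst
server ntp3.vodafone.com.au iburst
server ntp4.vodafone.com.au iburst
server ntp5.vodafone.com.au iburst
```

Note that the original xntpd shipped with older Solaris 10 releases is NTPv3 & may not understand iburst, so check your daemon version first.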

Getting started with CFEngine part 5 (Distributed node management & explaining desired state configuration)


In the last tutorial I created the policy my.cf to create a file named hello-world in the /tmp directory.
But that was only on my policy server. I briefly illustrated how to distribute that across clients.
Here is a detailed example:

Copy the policy file to the /var/cfengine/masterfiles directory:

cp my.cf /var/cfengine/masterfiles

Edit the /var/cfengine/masterfiles/promises.cf file.
Note: use vim instead of vi, as its syntax highlighting makes spotting syntactical errors easier.

Modify the /var/cfengine/masterfiles/promises.cf file and insert the bundle name my_test in the bundlesequence in body common control. Don't forget to put a comma after the bundle name if more entries follow.
Include my.cf in the inputs section of body common control in promises.cf. Remember to put it in double quotes followed by a comma.
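For illustration, here is a stripped-down sketch of what body common control in promises.cf might look like after both edits. The entries "def" & "update.cf" are hypothetical stand-ins for whatever your stock file already lists; keep those as they are & only add the my_test & my.cf entries:

```
body common control
{
bundlesequence => { "def", "my_test" };   # "def" stands in for the existing entries
inputs => { "update.cf", "my.cf" };       # "update.cf" stands in for the existing entries
}
```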
With this done, every time the cf-agent running on a client contacts the policy server the policy will be executed.

In the basics tutorial I mentioned attaining desired state quite a few times. I'll try to explain this based on our my.cf policy file example.
When the policy was executed, the hello-world file was created on both the server & the client. I then removed the file.
[root@dockertest tmp]# rm hello-world
rm: remove regular empty file ‘hello-world’? y
[root@dockertest tmp]# ls
cfengine-nova-3.7.3-1.x86_64.rpm  edit_motd_helloworld.cf  hsperfdata_root  my.cf  promises.cf  redis.sock
[root@dockertest tmp]# date
Sun Apr 24 15:04:21 EDT 2016

After a few minutes when I checked, wow, the file was there again!
[root@dockertest tmp]# ls
cfengine-nova-3.7.3-1.x86_64.rpm  edit_motd_helloworld.cf  hello-world  hsperfdata_root  my.cf  promises.cf  redis.sock
[root@dockertest tmp]# date
Sun Apr 24 15:13:12 EDT 2016
[root@dockertest tmp]# ls -l hello-world
-rw-------. 1 root root 0 Apr 24 15:06 hello-world
[root@dockertest tmp]#

This happened because when the cf-agent synced up after the 5-minute interval it detected a deviation from the desired state described in promises.cf & automatically executed the policy again, thereby attaining the desired state of configuration once more.
This is really useful when we want to monitor & protect files against unauthorized deletion.

Sunday, 24 April 2016

Getting started with CFEngine part 4 (writing the first policy)


Ok, so we've installed the policy server & client & tested out some of the commands.
Now let's write a policy.

The first policy would have to say 'hello world'.

Given below is a small policy file my.cf:

[root@dockertest tmp]# cat my.cf
body common control
{
bundlesequence => { "my_test" };
}
bundle agent my_test{
 files:
  linux::
   "/tmp/hello-world"
     create => "true";
}

The only mandatory element in this section is bundlesequence, which tells CFEngine which bundles to execute and in which order. For the above example policy, we have a single bundle, my_test, executed:

body common control
{
bundlesequence => { "my_test" };
}

The example says to create a file /tmp/hello-world on all Linux hosts.

To run a syntax check run the following command:

[root@dockertest tmp]# cf-promises -f ./my.cf
[root@dockertest tmp]#

To execute the policy type:

[root@dockertest tmp]# cf-agent -KI -f ./my.cf
    info: Created file '/tmp/hello-world', mode 0600
[root@dockertest tmp]#
[root@dockertest tmp]# ls -l /tmp/hello-world
-rw-------. 1 root root 0 Apr 24 13:52 /tmp/hello-world
[root@dockertest tmp]#

To run the policy on a distributed system:
By default cf-serverd will serve policy from the /var/cfengine/masterfiles directory. Upon updates, cf-agent will be notified and start to download these before executing them locally.
This means that by default you should store all your policies in the /var/cfengine/masterfiles directory on your policy server. So, now let’s copy our policy to this location:
cp /tmp/my.cf /var/cfengine/masterfiles/my.cf
1. Modify the /var/cfengine/masterfiles/promises.cf file and insert the bundle name my_test in the bundlesequence in body common control. 
2. Include the my.cf  in the inputs section of body common control in promises.cf. 
Save the file, and you are done!

Getting started with CFEngine part 3 (some useful commands)



This is a brief tutorial about some easy & useful CFEngine commands.

To view the installed version of CFEngine:


[root@dockertest init.d]# /var/cfengine/bin/cf-agent -V
CFEngine Core 3.7.3

CFEngine Enterprise 3.7.3

Keys are necessary when operating in a distributed CFEngine environment. The below command also sets up the basic directory structure used by CFEngine under /var/cfengine/. To set up the keys type:

[root@dockertest ~]# /var/cfengine/bin/cf-key
A key file already exists at /var/cfengine/ppkeys/localhost.pub

Bootstrapping:

Bootstrapping the agent means copying the masterfiles to their final working location in /var/cfengine/inputs/ and starting the base cf-execd daemon. This process controls the periodic execution of cf-agent, which is the one that actually executes the promises in the provided policies.

[root@cfeclient tmp]# /var/cfengine/bin/cf-agent --bootstrap --policy-server 192.168.44.179
 warning: Deprecated bootstrap options detected. The --policy-server (-s) option is deprecated from CFEngine community version 3.5.0.Please provide the address argument to --bootstrap (-B) instead. Rewriting your arguments now, but you need to adjust them as this support will be removed soon.
  notice: Bootstrap mode: implicitly trusting server, use --trust-server=no if server trust is already established
R: Bootstrapping from host '192.168.44.179' via built-in policy '/var/cfengine/inputs/failsafe.cf'
R: This autonomous node assumes the role of voluntary client
R: Updated local policy from policy server
R: Did not start the scheduler
  notice: Bootstrap to '192.168.44.179' completed successfully!

To check the status of cfagent run the following command:

[root@dockertest init.d]# ./cfengine3 status
cf-consumer (pid 4875 4874 4873 4872 4871 4870 4869 4868 4867 4866 4865 4864 4863 4862 4861 4860 4859 4858 4857 4856 4855 4854 4853 4852 4851 4848) is running...
cf-hub (pid 4906) is running...
redis-server (pid 4841) is running...
httpd (pid 4986 4985 4984 4983 4982 4833) is running...
postgres is not running
cf-execd (pid 4964) is running...
cf-serverd (pid 4970) is running...
cf-monitord (pid 4976) is running...


To restart the agent type:

[root@dockertest init.d]# ./cfengine3 restart
Shutting down runalerts.sh
Starting CFEngine postgresql-hub:
waiting for server to start.... done
server started

Starting CFEngine httpd: httpd: Could not reliably determine the server's fully qualified domain name, using 192.168.44.179 for ServerName

Starting redis-server...
Starting cf-consumer...                                    [  OK  ]
Starting cf-hub...                                         [  OK  ]
Starting cf-execd...                                       [  OK  ]
Starting cf-serverd...                                     [  OK  ]
Starting cf-monitord...                                    [  OK  ]
[root@dockertest init.d]#                                  [  OK  ]


To print verbose information about the behavior of the agent type:

cf-agent -v
------------------ output truncated
 verbose: Diff is empty, nothing to save at '/var/cfengine/state/diff/lastseen.diff'
 verbose: Diff is empty, nothing to save at '/var/cfengine/state/diff/software.diff'
 verbose: Diff is empty, nothing to save at '/var/cfengine/state/diff/patch.diff'
 verbose: No lock purging scheduled
 verbose: Outcome of version CFEngine Promises.cf 3.7.3 (agent-0): Promises observed - Total promise compliance: 100% kept, 0% repaired, 0% not kept (out of 212 events). User promise compliance: 100% kept, 0% repaired, 0% not kept (out of 192 events). CFEngine system compliance: 100% kept, 0% repaired, 0% not kept (out of 20 events).

Getting started with CFEngine part 2 (installing the policy server & client)


For the purpose of this installation I've used CFEngine version 3.7.3 which is the latest version available as of this writing.

Installing the policy server (hub):

The packages for installation can be downloaded from the CFEngine official website.

On the website CFEngine recommends using the 'quick start' approach for Linux distributions, which I followed for my installation.
The version of CFEngine being used here is the full version of CFEngine Enterprise, but the number of hosts (nodes) is limited to 25.
System requirements:

CFEngine policy server:
  • 64-bit machine with a recent version of Linux.
  • 2 GB of memory, and 100 MB of disk space per host you plan to connect to.
  • Port 5308 needs to be open.
  • Hostname must be set.

Download and Install CFEngine Policyserver:
Run the following command to download and automatically install CFEngine on a fresh 64-bit Linux machine

wget http://s3.amazonaws.com/cfengine.packages/quick-install-cfengine-enterprise.sh  && sudo bash ./quick-install-cfengine-enterprise.sh hub

Although the above command looks simple, it will fail miserably if the prerequisites are not in place.

Ensure that your /etc/hosts file is populated appropriately else the install will fail with the following error:

HTTP request sent, awaiting response... 200 OK
Length: 46561674 (44M) [application/x-redhat-package-manager]
Saving to: ‘cfengine-nova-hub-3.7.3-1.x86_64.rpm’

100%[====================================================================================================================================================>] 46,561,674  61.8KB/s   in 16m 25s

2016-04-24 00:43:57 (46.2 KB/s) - ‘cfengine-nova-hub-3.7.3-1.x86_64.rpm’ saved [46561674/46561674]

hostname: Name or service not known
hostname -f does not return a valid name, but this is a requirement for generating a
SSL certificate for the Mission Portal and API.
Please make sure that hostname -f returns a valid name (Add an entry to /etc/hosts or
fix the name resolution).
error: %pre(cfengine-nova-hub-3.7.3-1.x86_64) scriptlet failed, exit status 1
error: cfengine-nova-hub-3.7.3-1.x86_64: install failed

Next, you need to have some dependencies installed, else the install fails with mysterious errors like this:

2016-04-24 01:29:38 (54.0 KB/s) - ‘cfengine-nova-hub-3.7.3-1.x86_64.rpm.1’ saved [46561674/46561674]

error: unpacking of archive failed on file /var/cfengine/bin/pg_dump;571c59c2: cpio: read failed - No such file or directory
error: cfengine-nova-hub-3.7.3-1.x86_64: install failed

The following are the pre-requisite packages:
  1. openssl 
  2. openssl-devel 
  3. flex 
  4. pcre 
  5. pcre-devel 
  6. openldap 
  7. gcc 
  8. tokyocabinet


In my case the installation still failed with 'pg_dump;571c59c2: cpio: read failed'. I later found out that pg_dump is a backup utility related to PostgreSQL, so I installed it.
After this the installation finally succeeded:


[root@dockertest ~]# rpm -ivh cfengine-nova-hub-3.7.3-1.x86_64.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:cfengine-nova-hub-3.7.3-1        ################################# [100%]

I didn't use the script this time because the script detects the distro & downloads & installs the rpm.
Since I already had the rpm downloaded I skipped the script.

Once installed, the policyserver needs to bootstrap to itself. Run the bootstrap command as follows:
/var/cfengine/bin/cf-agent --bootstrap <IP address>
[root@dockertest ~]# /var/cfengine/bin/cf-agent --bootstrap 192.168.44.179
R: Bootstrapping from host '192.168.44.179' via built-in policy '/var/cfengine/inputs/failsafe.cf'
R: This host assumes the role of policy server
R: Updated local policy from policy server
R: Started the server
R: Started the scheduler
notice: Bootstrap to '192.168.44.179' completed successfully!


Installing the CFEngine client:

System requirements:

CFEngine hosts (clients):
  • 32/64-bit machines with a recent version of Linux.
  • 20 MB of memory, and 20 MB of disk space.
  • Port 5308 needs to be open.

CFEngine provides a script for the client install as well which I used here:

wget http://s3.amazonaws.com/cfengine.packages/quick-install-cfengine-enterprise.sh  && sudo bash ./quick-install-cfengine-enterprise.sh agent

After the install completes you'll need to bootstrap the client to the policy server with the following command:

/var/cfengine/bin/cf-agent --bootstrap <Policy server IP address>

[root@cfeclient ~]# /var/cfengine/bin/cf-agent --bootstrap 192.168.44.179
  notice: Bootstrap mode: implicitly trusting server, use --trust-server=no if server trust is already established
  notice: Trusting new key: SHA=dd2074ca7f7d0bbf00f666eea1f0aa3a8121fa2cb924cc6e4739ccef061ebbb3
R: Bootstrapping from host '192.168.44.179' via built-in policy '/var/cfengine/inputs/failsafe.cf'
R: This autonomous node assumes the role of voluntary client
R: Updated local policy from policy server
R: Started the server
R: Started the scheduler
  notice: Bootstrap to '192.168.44.179' completed successfully!

In the next tutorial I'll share some useful commands & the process for logging in to the Mission Portal GUI.


Getting started with CFEngine part 1 (The basics)


This quick start guide is based on my understanding of CFEngine concepts & how it works.
I've tried to make it as understandable & precise as I could.

CFEngine is a configuration management & automation tool that has been around since the 90s.
It works on a variety of UNIX platforms as well as Windows.

Some of its features are:
  • Ensures systems have self-healing capabilities.
  • Convergence of systems to reach a desired state of configuration.

Some low level examples include:

  • Build new nodes
  • Deploy & manage services & applications
  • Manage databases
  • Manage ACLs

Components of CFEngine:

cf-agent
cf-monitord
cf-execd
cf-serverd

In CFEngine a desired state configuration of nodes is reached via implementation of policies.

Server-client architecture:



CFEngine is based on a server-client architecture wherein the cf-agent running on the client communicates with the cf-serverd daemon on the server (hub) for policy updates every 5 minutes.
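That 5-minute interval comes from cf-execd's default schedule. As a hedged sketch (going from my recollection of the CFEngine reference — verify the attribute against your version's docs), the run frequency can be tuned via the schedule attribute in body executor control, which lists the time classes during which cf-agent is launched:

```
body executor control
{
schedule => { "Min00_05", "Min05_10", "Min10_15", "Min15_20",
              "Min20_25", "Min25_30", "Min30_35", "Min35_40",
              "Min40_45", "Min45_50", "Min50_55", "Min55_00" };
}
```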


The next part introduces some key terms used in writing configurations to be executed by CFEngine.

Promise Theory:

A model of voluntary cooperation between individual agents who publish their intentions to one another in the form of promises.
Files & processes can make promises about their contents.

A process can make a promise to be in a running state but cannot make a promise regarding its configuration.

Anatomy of a promise:

type:
 context::
       "promiser" -> "promisee"
         attribute => "value";

In the above example:

type can be files or commands.
context is a condition deciding where & when to execute the promise.
promiser is the file or process making the promise.
attribute details & constrains a promise.
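To make the anatomy concrete, here is a small hypothetical files promise following that skeleton (the path & the promisee are made up for illustration):

```
files:
 linux::
       "/tmp/app.conf" -> "app team"
         create => "true";
```

Here files is the type, linux:: is the context, "/tmp/app.conf" is the promiser, "app team" is the promisee, & create is the attribute constraining the promise.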

Bundles:

A bundle is a logical grouping of promises that are written with the aim of achieving a common end goal. For example promises written to install, configure & start MySQL.

Anatomy of a bundle:

bundle type name{
 type:
  context::
        "promiser" -> "promisee"
          attribute => "value";
}

Bundles apply to the binary that executes them. Agent bundles apply to cf-agent.

Body:

A body is a collection of attributes.

Anatomy of a body:

body type name{
         attribute1 => "value";
         attribute2 => "value";
}

In a body, every attribute ends with a semicolon.
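As a concrete (hypothetical) example following that skeleton — a perms body named rw_owner, whose attributes a files promise could then reference via perms => rw_owner:

```
body perms rw_owner
{
         mode => "0600";
         owners => { "root" };
}
```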


The components discussed above come together in the form of a plain text file with a .cf extension, called a policy.

Small shell script to gather VCS configuration backup

Given below is an easy to use script that gathers some useful information related to VCS.

#!/usr/bin/bash
echo "gathering VCS information for server `hostname`"
# Note: a dash-separated date is used here; slashes in the date format
# would make mkdir -p create nested directories.
CWD=/var/tmp/`hostname`_VCSinfo_`date "+%d-%m-%y"`
mkdir -p "$CWD"
cd "$CWD" || exit 1
echo "------------------------------------------"
echo "gathering LLT & GAB information"

cat /etc/llthosts >> llthosts.txt
cat /etc/llttab >> llttab.txt
cat /etc/gabtab >> gabtab.txt
cat /etc/VRTSvcs/conf/config/main.cf >> main_cf.txt
lltstat -nvv >> lltstat.txt
gabconfig -a >> gabconfig_a.txt

echo "------------------------------------------"
echo "gathering HA information"

haclus -display >> haclus_display.txt
hauser -display >> hauser_display.txt
hasys -state >> hasys_state.txt
hasys -display >> hasys_display.txt
hastatus -summary >> hastatus_summary.txt
hagrp -display >> hagrp_display.txt
hares -display >> hares_display.txt
echo "------------------------------------------"
echo "script is complete"
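One thing to watch with date-stamped directory names like the one this script creates: slashes in the date format expand into extra path components, so mkdir -p silently creates a nested tree instead of one directory. A throwaway demo (the paths under mktemp are purely illustrative):

```shell
#!/bin/bash
base=$(mktemp -d)

# Slashes in the format: e.g. "25/04/16" expands to three nested directories.
bad=$(date "+%d/%m/%y")
mkdir -p "$base/host_VCSinfo_$bad"

# Dashes keep it a single directory: e.g. "25-04-16".
good=$(date "+%d-%m-%y")
mkdir -p "$base/host_VCSinfo_$good"

# Inspect the resulting layout: the slash version buried the data two levels deep.
find "$base" -type d | sort
```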

Saturday, 23 April 2016

The difference between 'zpool add' & 'zpool attach'

Both sub-commands, add & attach, are used to configure additional storage for a zpool but differ in the way they function.

'zpool add' simply adds a vdev to a zpool.

In the below example I created a zpool called tpool & then added a vdev to it:

root@1z0822:~# zpool list tpool
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tpool  195M   153K  195M   0%  1.00x  ONLINE  -

root@1z0822:~# zpool add tpool /root/disk2

root@1z0822:~# zpool list tpool
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tpool  390M   122K  390M   0%  1.00x  ONLINE  -
root@1z0822:~#

The result was a simple concat operation.

In the second example I attached a vdev to the zpool:

root@1z0822:~# zpool list tpool
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tpool  195M   128K  195M   0%  1.00x  ONLINE  -

root@1z0822:~# zpool attach tpool /root/disk1 /root/disk2

root@1z0822:~# zpool list tpool
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
tpool  195M   126K  195M   0%  1.00x  ONLINE  -
root@1z0822:~#
root@1z0822:~# zpool status tpool
  pool: tpool
 state: ONLINE
  scan: resilvered 88K in 0h0m with 0 errors on Sat Apr 23 21:30:32 2016
config:

        NAME             STATE     READ WRITE CKSUM
        tpool            ONLINE       0     0     0
          mirror-0       ONLINE       0     0     0
            /root/disk1  ONLINE       0     0     0
            /root/disk2  ONLINE       0     0     0

errors: No known data errors
root@1z0822:~#

A zpool attach operation results in the creation of a mirror.

Hope this helps clarify the difference between zpool add/attach. 

Fix for zonecfg verify 'Problem saving file'

I've created dozens of zones in the last few months but never encountered such an error with zonecfg before.

While trying to verify the zone configuration I got 'problem saving file' error:

root@global:/zones# zonecfg -z zone1
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create -b
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> verify
zone1: Problem saving file
zonecfg:zone1> exit
zone1: Problem saving file
Configuration not saved; really quit (y/[n])? y

I then tried to verify the configuration of a running zone & got the same error:

root@global:/zones# zonecfg -z zone2 verify
zone2: Problem saving file
root@global:/zones#

After some troubleshooting I realized that my /var & /tmp file systems were full, which was not allowing any temporary files to be created, resulting in the error.

I did some housekeeping & everything went smoothly thereafter.

Thursday, 21 April 2016

Fixing 'segmentation fault' error while extending a striped logical volume


The purpose of this post is twofold.

First, to fix the 'segmentation fault' error.
Second, to partially convert a striped logical volume to a linear logical volume online.

Partially converting a striped logical volume to a linear one is fairly simple:
while running the lvextend command, specify a num_stripes (-i) value of 1.
This makes the extended portion of the LV span in a linear fashion.

Yesterday I came across an issue while extending a striped logical volume strLV1.

[root@linuxserver ~]# lvs --segment vg
  LV      VG       Attr   #Str Type    SSize
  strLV0 vg -wi-ao    4 striped 205.00G
  strLV1 vg -wi-ao    4 striped 172.00G

The LV was striped across four disks. I had to extend this volume but did not have another four disks to span it on.
So I decided to do away with the stripe & span the additional space linearly.

[root@linuxserver ~]# lvextend -i1 -L +99G /dev/mapper/vg-strLV1
  Extending logical volume strLV1 to 271.00 GB
Segmentation fault

So, I got a segmentation fault error, which was a first for me.

If I had tried to extend the volume keeping the striped layout in place, I would've gotten the following error:

[root@linuxserver ~]# lvextend  -L +99G /dev/mapper/vg-strLV1
  Using stripesize of last segment 1.00 MB
  Extending logical volume strLV1 to 271.00 GB
  Insufficient suitable allocatable extents for logical volume strLV1: 19420 more required

After a lot of thinking I tried to apply the fundamental meaning of 'segmentation fault' to my scenario: the system was trying to access a location that did not exist.

I then ran the lvextend command again & this time gave the disk name after the volume name to direct the expansion onto that particular disk, & this fixed the issue:

'df -h' before expansion:

[root@linuxserver ~]# df -h /FS
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg-strLV1
                      170G   57G  105G  35% /FS

[root@linuxserver ~]# lvextend -i1 -L +94G /dev/mapper/vg-strLV1 /dev/mapper/mpath106
  Extending logical volume strLV1 to 272.00 GB
  Logical volume strLV1 successfully resized

[root@linuxserver ~]# resize2fs /dev/mapper/vg-strLV1
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/mapper/vg-strLV1 is mounted on /FS; on-line resizing required
Performing an on-line resize of /dev/mapper/vg-strLV1 to 71303168 (4k) blocks.
The filesystem on /dev/mapper/vg-strLV1 is now 71303168 blocks long.

[root@linuxserver ~]# df -h /FS
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg-strLV1
                      268G   57G  198G  23% /FS

The LV will look something like this after the expansion:

[root@linuxserver ~]# lvs --segments vg
  LV      VG       Attr   #Str Type    SSize
  strLV0 vg -wi-ao    4 striped 205.00G
  strLV1 vg -wi-ao    4 striped 172.00G
  strLV1 vg -wi-ao    1 linear   99.00G


Note: This procedure is a fix for a bad situation. You will lose performance once the stripe goes linear. Under normal circumstances you should always try to keep the striped layout intact.


Wednesday, 20 April 2016

Ping response check script

This is a short script which determines if the server is live or not on the basis of ping responses.
The source machine will send 2 ICMP packets to the destination machine.
If the source machine does not receive a response it will send an email stating that the server is down.

#!/bin/bash
# Send 2 ICMP packets; a non-zero exit status means no response.
ping -c 2 <Destination IP address>
RESULT=$?
if [ "$RESULT" -ne 0 ]
then
    echo "<host name> is no longer responding to ping messages" | mail -s "<host name> server is down" admin@example.com
fi

Replace <host name> with the destination server name & <Destination IP address> with its IP address in the above code.

How to check Console/MP logs in HP-UX


This guide will describe how to check console/MP logs from OS. 
It should be followed whenever there is any alert for event log error.

Steps: 
Go to “/var/stm/logs/os”: It’s the location where all the console/MP (FPL) logs are stored.

Check the latest FPL logs using the “slview” command.

#  slview -f fpl.log.11       [full path “/usr/sbin/diag/contrib/slview”]

The above command will show the following output, please follow the instructions:

Use the following navigation commands to display the logs, in the sequence A, 1, F:

A (Alert Level filter) → select alert level “1” → F (display the first, i.e. oldest, block)
hpuxnode[os]# slview -f fpl.log.06
     Welcome to the FPL (Forward Progress Log) Viewer 1.2


   The following FPL navigation commands are available:
         D: Dump log starting at current block for capture and analysis
         F: Display first (oldest) block
         L: Display last (newest) block
         J: Jump to specified entry and display previous block
         +: Display next (forward in time) block
         -: Display previous (backward in time) block
      <cr>: Repeat previous +/- command
         ?: Display help
         q: Exit viewer

   The following event format options are available:
         K: Keyword
         R: Raw hex
         T: Text
         V: Verbose

   The following event filter options are available:
         A: Alert level
         C: Cell
         U: Unfiltered

SL (<cr>,+,-,?,F,L,J,D,K,R,T,V,A,C,U,q) > A

   Alert Level Filter:
     0: Minor Forward Progress
     1: Major Forward Progress
     2: Informational
     3: Warning
     5: Critical
     7: Fatal
     Q: Quit

For example, selecting an alert level threshold of 3
selects all events with alert levels of 3 or higher.

Please select alert level threshold:  1

Switching to alert level 1 filter.
SL (<cr>,+,-,?,F,L,J,D,K,R,T,V,A,C,U,q) >  F
7508  PM   0     *3 0x64800b1400e00000 0x0001ffffff03ff64  IOFAN_FAIL
7509                                   Mon Apr 13 09:35:42 2015
7510  MP   0      1 0x24800acc00e00000 0x000101ffffffff85  MP_BUS_DEVICE_DETACH
7511                                   Mon Apr 13 09:35:42 2015
7512  MP   0      1 0x24800acc00e00000 0x000103ffffffff85  MP_BUS_DEVICE_DETACH
7513                                   Mon Apr 13 09:35:42 2015
7514  MP   0      1 0x24800acc00e00000 0x000003ffffffff85  MP_BUS_DEVICE_DETACH
7515                                   Mon Apr 13 09:35:42 2015
7516  PDHC 0,0    2 0x54800c3900e00000 0x00000000000d000c  CELL_POWER_OFF
7517                                   Mon Apr 13 09:35:42 2015
7518  PDHC 0,4    2 0x54800c3904e00000 0x00000000000d000c  CELL_POWER_OFF
7519                                   Mon Apr 13 09:35:42 2015
7520  PDHC 0,2    2 0x54800c3902e00000 0x00000000000d000c  CELL_POWER_OFF
7521                                   Mon Apr 13 09:35:42 2015
7522  PDHC 0,6    2 0x54800c3906e00000 0x00000000000d000c  CELL_POWER_OFF
7523                                   Mon Apr 13 09:35:42 2015
7524  MP   0      1 0x24800acc00e00000 0x000001ffffffff85  MP_BUS_DEVICE_DETACH
7525                                   Mon Apr 13 09:35:42 2015
7526  CLU  0      1 0x24800b3400e00000 0x000001ffffffff8d  HIOPB_POWER_OFF
7527                                   Mon Apr 13 09:35:42 2015
7528  CLU  0      1 0x24800b3400e00000 0x000003ffffffff8d  HIOPB_POWER_OFF
7529                                   Mon Apr 13 09:35:42 2015

SL (<cr>,+,-,?,F,L,J,D,K,R,T,V,A,C,U,q) > D

7530  CLU  0      1 0x24800b3400e00000 0x000101ffffffff8d  HIOPB_POWER_OFF
7531                                   Mon Apr 13 09:35:42 2015
7532  CLU  0      1 0x24800b3400e00000 0x000103ffffffff8d  HIOPB_POWER_OFF
7533                                   Mon Apr 13 09:35:42 2015
7534  PM   0     *7 0xe4800b1e00e00000 0x0001ffffffffff64  SHUTDOWN_IOFAN
7535                                   Mon Apr 13 09:35:43 2015
7536  PM   0      2 0x4b000af800e00000 0x01000000552b8def  CABPWR_OFF
                                       Mon Apr 13 09:35:43 2015
7537  PM   0      1 0x2b000ae600e00000 0x01000000552b8def  BLOWER_SPEED_CHG_NORM
                                       Mon Apr 13 09:35:43 2015
7538  CLU  0      1 0x2b000b4100e00000 0x01000000552b8df0  SYS_BKP_POWER_OFF

Introduction Let me start by saying that this article isn't about capture groups in grep per se. What we are going to do here with gr...