Sunday 31 July 2016

Open up multiple ssh sessions while entering password only once (ssh multiplexing)

In this quick tutorial I describe how to open up multiple ssh connections to the same host while entering the password for only the first session.

To do this, create a file named config in the .ssh sub-directory under the home directory of the user & populate it with the following contents:
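The file would look like this (a minimal example built from the three directives explained below):

Host *
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p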


Here's the breakdown of the text in the file:

Host * (apply these settings to all hosts to which connections are initiated. You can also specify a single host, a domain or a network).

ControlMaster auto (set the control master to auto)

ControlPath ~/.ssh/master-%r@%h:%p (This specifies the path to the control socket used for connection sharing. %r denotes the remote login name, %h denotes the destination hostname & %p denotes the port number used, which is 22 by default).

To test it out, open up an ssh connection to a host. You'll be prompted for a password. Now open up another connection to the same host. This time there won't be any password prompt:
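For example (the host name & user here are just placeholders):

$ ssh user@server1     # first connection - prompts for password
$ ssh user@server1     # second connection from another terminal - no password prompt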


The answer to how passwordless authentication works after the first login lies in the socket file in the ~/.ssh directory.
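Listing the ~/.ssh directory after the first login shows the control socket (illustrative output; the name follows the master-%r@%h:%p pattern from the config):

$ ls -l ~/.ssh/
srw------- 1 user user 0 Jul 31 10:15 master-user@server1:22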



When we did the first login, a control socket got created for the authenticated master connection. Subsequent logins to the same host are multiplexed over this existing connection through the socket, which is why no password is asked for.

Do note that passwordless authentication works only as long as the first session, i.e. the master session, stays open.

To check if the master connection is open type:
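For example, using the -O check option of ssh (host name is a placeholder; the output shown is illustrative):

$ ssh -O check user@server1
Master running (pid=2915)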


This type of multiplexed connection setup can be very useful in situations where we need to access a system over & over but don't have key-based passwordless authentication set up.

Use SSH/SCP to access a remote server through an intermediate server using tunneling & port forwarding

I know the title of the post is long but I wanted it to be accurate.
So, I have a situation wherein there are 3 servers: serverA, serverB & serverC.
ServerA & serverC can both connect to serverB but not to each other.
But what if we need to access serverC from serverA, or copy a file from serverC to serverA?

We can accomplish this using ssh tunneling & port forwarding.

To get the setup in place, the following two directives must be set to yes in the /etc/ssh/sshd_config file:

  • AllowTcpForwarding (Specifies whether TCP forwarding is permitted. The available options are “yes” or “all” to allow TCP forwarding, “no” to prevent all TCP forwarding, “local” to allow local forwarding only or “remote” to allow remote forwarding only.)
  • GatewayPorts (Specifies whether remote hosts are allowed to connect to ports forwarded for the client. By default, sshd binds remote port forwardings to the loopback address. This prevents other remote hosts from connecting to forwarded ports. GatewayPorts can be used to specify that sshd should allow remote port forwardings to bind to non-loopback addresses, thus allowing other hosts to connect. The argument may be “no” to force remote port forwardings to be available to the local host only, “yes” to force remote port forwardings to bind to the wildcard address, or “clientspecified” to allow the client to select the address to which the forwarding is bound. The default is “no”.)

On the source server, i.e. serverA in our case, run the following command:

ssh -L <local port>:serverC:22 serverB

The above command will establish a tunnel from serverA to serverC through serverB.
So, now if you want to connect to serverC from serverA type:

ssh localhost -p <local port>

If you want to copy a file from serverC to serverA type:

scp -P <local port> localhost:/path/to/file /path/to/save/file

Here is a cool demonstration on three CentOS 7 machines:

From my source machine I create a tunnel to 192.168.44.131 via 192.168.44.132 using port forwarding at my local port 9191:
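The command I ran was along these lines (9191 is the local port I chose):

$ ssh -L 9191:192.168.44.131:22 192.168.44.132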


Now with the above command we are logged in to 192.168.44.132 & the tunnel has been established.

To check if port forwarding is working, look for the port 9191 in netstat output:
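Something along these lines should show up, with ssh listening on the forwarded port (output trimmed & illustrative):

$ netstat -tnlp | grep 9191
tcp   0   0 127.0.0.1:9191   0.0.0.0:*   LISTEN   2915/ssh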


We can infer from the above output that the ssh process is listening on the local port 9191.

Now, to connect to 192.168.44.131 which is our serverC in this example:
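Using the local end of the tunnel:

$ ssh localhost -p 9191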


That's it & we're logged in!

To test the scp transfer through the tunnel, let's copy a file:
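Again through the local end of the tunnel (the paths are placeholders):

$ scp -P 9191 localhost:/path/to/file /tmp/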


SSH to a Linux machine from the Chrome browser

Yes, it's possible. We can in fact log in to a Linux server from a browser using ssh.

Open up the Chrome browser & in the Chrome Web Store search for Secure Shell.


Click on 'Add to Chrome'; this will download the app.
Once the download completes it'll open up a new tab in the browser & you'll be able to see the ssh app in the apps section.


Now just click on the secure shell icon & it will launch a window where you can enter your username & hostname of the system you'd like to log in to.


And that's it! Press enter, you'll be prompted for the password & you're logged in!



Thursday 28 July 2016

Updating kernel package in Linux

For a kernel upgrade in Linux, we do not need to mount the RHEL ISO on the server since we will not be patching the entire package set installed on the server.
For a kernel upgrade, we only install the following packages:

·         kernel-2.6.32-504.8.1.el6.x86_64.rpm
·         kernel-firmware-2.6.32-504.8.1.el6.x86_64.rpm
·         bfa-firmware-2.6.32-504.8.1.el6.x86_64.rpm


The main package is kernel-2.6.32-504.8.1.el6.x86_64.rpm & the remaining two are dependent packages.

To install the packages use the following command:

# yum localinstall kernel-2.6.32-504.8.1.el6.x86_64.rpm kernel-firmware-2.6.32-504.8.1.el6.x86_64.rpm bfa-firmware-2.6.32-504.8.1.el6.x86_64.rpm


After this, reboot the server via ‘init 6’.
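Once the server is back up, the new kernel can be confirmed with uname:

# uname -r
2.6.32-504.8.1.el6.x86_64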

Linux interview questions

This is a post that is in its infancy & will undergo some updates from time to time.

But for now here are a few interview questions that I've been asked in recent years:

  1.  How to check which package provides a particular file?
  2. What are system calls & how do we check them?
  3. How to register a client with RHN?
  4. Explain steps to configure NIC bonding.
  5. What are kernel modules? How to list them & load them?
  6. How to update the OS without the use of RHN?
  7. When is "yum localinstall" useful?
  8. How to configure multipathing?
  9. How to roll back an OS update like RHEL 6.7 to 6.5?
  10. Can you rename network interface cards from the OS?
  11. Explain about performance monitoring tools available in Linux.
  12. How would you configure generation of a crash dump if a kernel panic occurs?
  13. How can you run a command every 10 seconds without using cron or at?
  14. Explain various RAID levels available in Linux.
  15. How can you migrate data residing in a file system from one storage vendor to another while utilizing LVM?
  16. What is the difference between INIT & systemd systems for OS initialization?
  17. What are the major differences between RHEL 6 & RHEL 7?

Wednesday 27 July 2016

Using screen to run commands on multiple servers.

In a previous tutorial I talked about how screen can be used to share a terminal display remotely.
In this tutorial we'll be seeing how screen can be used to run commands across multiple servers.

First make sure screen is installed on the system. If not, install it via yum.

Next create the first screen by running screen command.


You'll see the word "screen 0" above the terminal window.

Now create a second screen by typing the key sequence ctrl+a c.


You'll see the word "screen 1" above the terminal window.

Now, while in screen 1 ssh to another server.


I had already set up passwordless ssh so there was no password prompt.

So, we have two screens now. To move between the screens use the key combinations ctrl+a n & ctrl+a p to go to the next & previous screens respectively.

If we have multiple screens open we can return to the original screen by typing the key sequence ctrl+a ".
This will give us a list of open screens. From the list select 0 which is our original screen.


From our original screen i.e. screen 0 we'll now launch a command which we want to be run on all screens.
To do this first type the key combination ctrl+a :.
This will give us a prompt. On the prompt type the following sequence:

at "#" stuff "uname -a^M"



Here's a breakdown of what we just did:

at addresses the screen(s) on which the command will be run.
# specifies that we want to run the command on all screens. If we want to run a command on a specific screen we can just type the screen number.
stuff means to stuff the screen buffer with the command or sequence of commands that follow.
^M is equivalent to the user pressing enter key after typing the command.
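For instance, to stuff the command only into screen 1 instead of all screens, the sequence would be:

at "1" stuff "uname -a^M"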

To close the screens type exit on the command prompt on the screen you want to close.
You can verify if any screens are open with screen -ls command.

Tuesday 26 July 2016

Fixing “RECORDROUTINGINFO: UNABLE TO COLLECT IPV4 ROUTING TABLE”

Today I created a new Linux virtual machine & as soon as I logged in to the console to configure it I was bombarded with the error “RECORDROUTINGINFO: UNABLE TO COLLECT IPV4 ROUTING TABLE”.

The guestInfo plugin is for use with the VMware Tools daemon (vmtoolsd). This plugin collects guest configuration and state information (e.g. storage capacity, networking state) and makes this information available via the vSphere SDK.
The /proc/net/route file contains the routing table with the addresses in hexadecimal notation.

So the kernel was unable to display the routing table & the netstat -rn command showed only a single route to destination 169.254.0.0, culminating in an APIPA situation.

The cause of this error is that the iputils package causes a delay in the boot process and a warning message appears when the guestinfo plug-in tool fails to parse the content from the /proc/net/route file.

I went through a couple of forums & some mentioned that the issue was fixed after a vMotion. I wanted to keep vMotion as a last option.
A few VMware KB articles mentioned adding the line rtc.diffFromUTC=0 to the VM configuration, so I did that.

This change can be done in two ways:

  • First is that you manually edit the .vmx file of the VM which can be found in the VM folder in the data store housing the virtual machine.
  • Second is that if you are using the vSphere Web Client in vSphere 5.5 you can go to edit settings > VM options > Configuration Parameters > Edit Configuration & add the parameter rtc.diffFromUTC & set its value to zero.

The next thing I did was reinstall VMware Tools & reboot the VM a couple of times.

I then set the gateway in the /etc/sysconfig/network file & the IP address & netmask in /etc/sysconfig/network-scripts/ifcfg-eth0, & restarted the network service.
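The entries were along these lines (the addresses & hostname shown here are placeholders):

# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=vmhost01
GATEWAY=192.168.44.2

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.44.135
NETMASK=255.255.255.0

# service network restart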

With that I was finally able to bring the VM onto the network.

Getting a Team Viewer like experience in UNIX with the screen command

Team Viewer is a very useful utility allowing us to share our computer screens remotely.
Consider a scenario wherein you & a colleague working in different geographic locations are trying to troubleshoot an issue, or maybe you are trying to give a knowledge transfer to a junior administrator who does not work in the same location as you.

In this situation using the screen command is a good option for you to be able to share your terminal session screen with anyone in real time. Here's how to do it:

Open up two terminal sessions for the same server & get the current shell PIDs.


On one of the terminal windows run the screen command & get the session id with 'screen -ls' command.


Now, attach to this screen on the second terminal session using the session id of the screen obtained from 'screen -ls' command.
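For example (the session id below is illustrative):

$ screen -ls
There is a screen on:
        3234.pts-1.server1      (Attached)
1 Socket in /var/run/screen/S-user.

$ screen -x 3234.pts-1.server1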


That's it. Now any command you run on one terminal will be visible on the other terminal as shown in the below screen shots:


To end the screen sessions press ctrl+d.

The screen command was available out of the box on this Solaris 11 server but may not be available in some Linux distributions by default. You can install screen from a yum repository in case it's not available.

Saturday 23 July 2016

Extracting files from RPMs

So why would you ever need to extract files from RPMs? An RPM is a single neat package that we can install with rpm -ivh <rpm_name>. Why would we want to break it down?
Well, there can be a number of reasons ranging from a messed up configuration file to removal of the installed binary.

So in this example we'll break down the httpd rpm into its constituent files.

Before breaking down an RPM, let's first understand what it's made of.
First, the directory structure that makes up the package is put into a .cpio archive. Then a description & some dependency information are added, & the .cpio file, description & dependency information bundled together make up the .rpm file.

To start off we'll use the yumdownloader program to download the rpm from the yum repository:
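In this case (httpd is the package we're after):

# yumdownloader httpd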


Note that yumdownloader downloads only the package & not its associated dependencies.
If you wish to download the dependencies as well, use the command repotrack <package name>.

Once we have that we'll use the rpm2cpio command to strip the description & dependency information & get the rpm down to the cpio file.
We then run the cpio command with the cpio archive being redirected as input, & this will extract the entire file structure into the current working directory.
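The two steps combined look something like this (the exact rpm file name will vary):

# rpm2cpio httpd-*.rpm | cpio -idmv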


Another cool thing about the whole process is that although we strip the rpm down to its directory structure, the original rpm file that we downloaded with yumdownloader is still preserved.

Thursday 21 July 2016

Fix for Xmanager error "/usr/X11R6/bin or /usr/bin/term not found"

X11 needs to be available for software like the Oracle database/client & many other products to be installed.

Essentially the basic components that should be in place are the following:
  • X11 forwarding should be enabled in the /etc/ssh/sshd_config file.
  • The packages libXmu & xorg-utils should be installed.

Recently I built some Linux VMs that required X11 to be enabled for GUI based installations.
So I did the work, met the prerequisites for X11, tested out the X11 functionality with an X11 tool & handed the VMs over.
I was surprised when the application guy came up to me & said that it wasn't working with Xmanager.

I did some digging & sure enough, it wasn't working. I got the below error when I ran xstart:


The fix for this error involved two things:
  • Install the rpm xterm.
  • Create a soft link for /usr/bin/xterm to /usr/X11R6/bin/xterm.
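The commands for the fix would be along these lines (the /usr/X11R6/bin directory may need to be created first):

# yum install xterm
# mkdir -p /usr/X11R6/bin
# ln -s /usr/bin/xterm /usr/X11R6/bin/xterm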

Thursday 14 July 2016

A quick note on shared libraries

A shared library is basically code which, when loaded into memory, can be linked to a program at run time.
Multiple programs can share the library's code loaded in memory, hence the name.
Shared libraries are easy to identify, with names prefixed by lib & suffixed by .so.

For programs to make efficient use of shared libraries it's important to know their location.

The file /etc/ld.so.conf consists of path names that will be searched by the loader for shared libraries to be loaded.

The contents of the file for a fedora workstation are given below:

[user@linclient ~]#cat /etc/ld.so.conf
include ld.so.conf.d/*.conf
/usr/lib/mysql
/usr/X11R6/lib
/usr/lib/qt-3.3/lib
/opt/hp/lib
/opt/dce-1.1/lib
[user@linclient ~]#

If we want to view the shared libraries required by a program or command to work we can use the ldd command.
For example, if we want to see the shared libraries required by touch command:

[user@linclient ~]#ldd /bin/touch
        libc.so.6 => /lib/tls/libc.so.6 (0x00bc2000)
        /lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x00ba9000)
[user@linclient ~]#

It would be inefficient if, every time a program needed a shared library, the loader had to check the /etc/ld.so.conf file, go through the path names & then traverse each individual directory tree to finally load the shared library object.

This task is taken care of by ldconfig. Ldconfig creates the necessary links and cache for the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, & in the directories /lib and /usr/lib. The cache file is named ld.so.cache & resides in the /etc directory.
If we make changes to the ld.so.conf file, or add another conf file or shared library object in the path names, we just need to run the ldconfig command to update the cache.

If we want to see the contents of the cache we can run 'ldconfig -v' & it will display the contents of the cache while it rebuilds it.
Here's a snippet:

[user@linclient ~]# ldconfig -v
ldconfig: Can't stat /opt/hp/lib: No such file or directory
/usr/lib/mysql:
        libmysqlclient_r.so.10 -> libmysqlclient_r.so.10.0.0
        libmysqlclient.so.10 -> libmysqlclient.so.10.0.0
/usr/X11R6/lib:
        libdps.so.1 -> libdps.so.1.0
        libXcursor.so.1 -> libXcursor.so.1.0.2
        libxcin.so.0 -> libxcin.so.0.0.0
        libGL.so.1 -> libGL.so.1.2
        libOSMesa.so.4 -> libOSMesa.so.4.0
        libfontenc.so.1 -> libfontenc.so.1.0

If we want to run a program which will use a non-standard shared library for a temporary purpose, we can add the path to the library to the LD_LIBRARY_PATH environment variable.
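For example, to run a program against a library kept in a non-standard location (the path & program name here are hypothetical):

$ export LD_LIBRARY_PATH=/opt/myapp/lib:$LD_LIBRARY_PATH
$ ./myprogram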

Changing color of displayed text in bash shell

So...... I'm feeling a bit colorful & I'd like my shell prompt to reflect my mood.
I decided to customize the MOTD displayed when I logged in & make it a bit more colorful.

We need to know the color codes for the different colors provided by ANSI escape codes given below:


Black        0;30     Dark Gray     1;30
Red          0;31     Light Red     1;31
Green        0;32     Light Green   1;32
Brown/Orange 0;33     Yellow        1;33
Blue         0;34     Light Blue    1;34
Purple       0;35     Light Purple  1;35
Cyan         0;36     Light Cyan    1;36
Light Gray   0;37     White         1;37

Then using these codes I added the following lines to my .bash_profile file:

##Custom MOTD for me##

BLUE='\033[0;34m'
LB='\033[1;34m'
NC='\033[0m' # No Color
printf "${LB}=================${NC} \n"
printf "${BLUE}Welcome Sahil${NC} \n"

printf "${LB}=================${NC} \n"


So whenever I login I get this prompt:


Along similar lines, if we want to modify the shell prompt/PS1 to be green, this can be accomplished by adding the following lines to the .bash_profile file:

GREEN='\033[0;32m'
NC='\033[0m' # No Color
# Wrap the color codes in \[ \] so bash knows they are non-printing characters
PS1="\[${GREEN}\][`id -un`@`hostname | cut -d "." -f 1` ~]#\[${NC}\]"

Setting up GUDS to be run as a simpler start/stop script

GUDS is a performance gathering script available from Oracle.
It collects data periodically, based on input parameters and places data into output files with time stamps.
The script is useful when we need to analyze server performance during high load situations.

GUDS options are as follows:

-b : Bind guds to the processor cpuid
-c : Count value for commands that require it
      (vmstat, prstat, iostat, mpstat, sar, ...)
-d : A one line statement describing the state of the system during this capture
-D : Change the default directory from /var/tmp/guds to <dir>
-g : If specified, skip collecting static configuration data
-h : Display this help text
-H : Run the script for the given number of hours, else for the given number
      of iterations if hours is zero
-i : Interval value for commands that require it
      (vmstat, prstat, iostat, mpstat, sar, ...)
-l : If specified and level >= 2, collect lockstat -H data
-L : Specify the interval during which lockstat is gathering data
      (default: 2 seconds)
-n : The number of iterations the script will do to collect data
-p : If specified, on SPARC, and level >= 2, collect trapstat data
-P : Override guds exit on perl binary error
-q : Run in non interactive mode
-r : If specified, collect prstat cpu,rss,size, and zones data if able
-R : If specified, allow script to run when uid is not root
-s : The SR # used to create the directory where the data are going to be stored
-S : If specified, mask all IP addresses in the data
-T : Emit timestamps for commands that loop 
      (vmstat, mpstat, prstat, iostat)
-v : Print the GUDS Version
-w : Wait time between each iteration (default: 0 seconds)
      If set to 0, then the next iteration will start when the previous finishes
-x : Run this extra command during each iteration
      Output saved in xtra.out - Errors saved in xtra.err
-X : Run the extended set of commands depending on specified level
      level 0 : nothing
      level 1 : lockstat for contention events
      level 2 : trapstat (if -p), lockstat profiling, threadlist, TNF tracing (default)
      level 3 : kmastat, kmausers, memstat (they can take a long time to complete
                on systems with a lot of memory)
      Increasing the extended level can affect system performance.
 -Z : If specified, allow script to run in a non-global zone


Here is a short script with which guds can be run without having to specify all the required parameters every time.
This script can be put in /etc/init.d & run in a similar fashion to the other scripts available in the /etc/init.d directory.

$ cat guds
#!/sbin/sh
#
# Script to start/stop GUDS perf collection scripts
#
case "$1" in
'start')
        cd /root
        /var/tmp/guds_3_6 -q -i 5 -T -c 17500 -n 1 -s `/usr/bin/date '+%Y%m%d%H%M'` -w 0 -X 2 -D /storage/performance/output -d "Performance Issues " &
        echo 'GUDS starting.'
        ;;

'stop')
        /usr/bin/kill `/usr/bin/ps -eo 'pid,ppid,fname'|/usr/bin/egrep -i guds|/usr/bin/egrep -v " 1 "|/usr/bin/awk '{print $1}'`
        echo 'GUDS stopping.'
        ;;

*)
        echo "Usage: $0 { start | stop }"
        exit 1
        ;;

esac
exit 0

This script can be scheduled as a cron job like the one in the example given below:

# Restart GUDS performance gathering tool at midnight every day
0 0 * * * /etc/init.d/guds stop; sleep 120; /etc/init.d/guds start

The timings can be adjusted as per requirements.

Wednesday 13 July 2016

Linux OS initialization systems

The initialization system is basically the part of the OS which controls how the various services start up. Linux essentially offers three init systems:

·         SystemVinit
·         Upstart
·         Systemd

This article briefly describes these systems.

SystemVinit:

It manages service startup by implementing the concept of run levels, wherein a certain set of services is operational at a given run level.
The following run levels are used in CentOS/SuSe/RedHat:

1.       Run level 0 (halt)
2.       Run level 1 (single user mode)
3.       Run level 2 (Multi user mode)
4.       Run level 3 (Multi user mode with n/w enabled)
5.       Run level 4 (unused)
6.       Run level 5 (Multi user mode with n/w & GUI enabled)
7.       Run level 6 (reboot)

In the case of Debian-based Linux distributions, run level 2 is somewhat analogous to run level 5 on CentOS, & run levels 3, 4 & 5 are clones of run level 2. To change a run level you can use telinit/init followed by the run level number. For example, to reboot the system type telinit 6 or init 6.
To change the default run level of a system modify the initdefault entry in /etc/inittab file.


Upstart:

Termed a successor to SystemVinit, it provides a faster boot time by not relying on the startup/shutdown scripts in the /etc/rc#.d directories for service startup. It enables a more parallel service startup: dependent services follow a dependency tree while services with no dependencies can start up quicker in parallel.

Systemd:

This is a more efficient, faster & more complex init system which is used in CentOS 7. It replaces run levels & correlates them with something called boot targets. For example, run level 3 corresponds to multi-user.target & run level 6 corresponds to reboot.target. For further exploration of boot targets one should look around in /etc/systemd/system & /usr/lib/systemd/system. Systemd uses the systemctl tool for management of boot targets.

To change to graphical boot target we’d type:
#systemctl isolate graphical.target

To check the default boot target:
#systemctl get-default

To change the default boot target:
#systemctl set-default <target name>

The commands telinit & init still work in CentOS 7 with its systemd implementation but they’ve been re-written in such a way that they actually run systemctl commands at the backend. So just because the init command works in CentOS 7 does not mean that it’s using SystemVinit.

Tuesday 12 July 2016

Installation of HP DP client

Installation of the HP DP client is a straightforward process if all goes well.

In this example I'm installing HP DP version 9 on a Linux VM.

Mount the HP DP client iso image.

[root@linclient ~]# mount /dev/sr0 /mnt
mount: block device /dev/sr0 is write-protected, mounting read-only
[root@linclient ~]#

Next go to the LOCAL_INSTALL directory & run the omnisetup.sh script.

(We only need to install the disk agent &, if there is a DB running, like Oracle in my case, the Oracle integration agent too.)

[root@linclient LOCAL_INSTALL]#pwd
/mnt/LOCAL_INSTALL
[root@linclient LOCAL_INSTALL]# ./omnisetup.sh
  No Data Protector software detected on the target system.

  Install (da) Disk Agent (YES/no/Quit)?
yes
  Install (ndmp) NDMP Media Agent (yes/NO/Quit)?
no
  Install (ma) Media Agent (YES/no/Quit)?
no
  Install (cc) User Interface (yes/NO/Quit)?
no
  Install (docs) English Documentation (Guides, Help) (yes/NO/Quit)?
no
  Install (jpn_ls) Japanese Documentation (Guides, Help) (yes/NO/Quit)?
no
  Install (fra_ls) French Documentation (Guides, Help) (yes/NO/Quit)?
no
  Install (chs_ls) Chinese Documentation (Guides, Help) (yes/NO/Quit)?
no
  Install (autodr) Automatic Disaster Recovery Module (yes/NO/Quit)?
no
  Install (StoreOnceSoftware) StoreOnce Software deduplication (yes/NO/Quit)?
no
  Install (vmware) VMware Integration (yes/NO/Quit)?
no
  Install (vepa) Virtual Environment Integration (yes/NO/Quit)?
no
  Install (vmwaregre_agent) VMware Granular Recovery Extension Agent Integration (yes/NO/Quit)?
no
  Install (db2) IBM DB2 Integration (yes/NO/Quit)?
no
  Install (informix) Informix Integration (yes/NO/Quit)?
no
  Install (lotus) Lotus Integration (yes/NO/Quit)?
no
  Install (oracle8) Oracle Integration (yes/NO/Quit)?
yes
  Install (sapdb) SAP DB Integration (yes/NO/Quit)?
no
  Install (saphana) SAP HANA Integration (yes/NO/Quit)?
no
  Install (sap) SAP R/3 Integration (yes/NO/Quit)?
no
  Install (sybase) Sybase Integration (yes/NO/Quit)?
no
  Install (ssea) HP StorageWorks Disk Array XP Agent (yes/NO/Quit)?
no
  Install (smisa) HP StorageWorks EVA SMI-S Agent (yes/NO/Quit)?
no


  Packets going to be (re)installed: omnicf ts_core integ da oracle8


  Installing Core (omnicf)...


Data Protector software package successfully installed
  Installing Core (ts_core)...


Data Protector software package successfully installed
  Installing Core of Integrations (integ)...


Data Protector software package successfully installed
  Installing Disk Agent (da)...


Data Protector software package successfully installed
  Installing Oracle Integration (oracle8)...


Data Protector software package successfully installed
  Client was not imported into the cell.
  Please, perform the import manually

  Installation/upgrade session finished.
[root@linclient LOCAL_INSTALL]#

To verify the setup check for rpms:

[root@linclient LOCAL_INSTALL]# rpm -qa | grep -i OB
oddjob-0.30-5.el6.x86_64
OB2-CORE-A.09.00-1.x86_64
pygobject2-2.20.0-5.el6.x86_64
OB2-DA-A.09.00-1.x86_64
libbasicobjects-0.1.1-11.el6.x86_64
perl-Object-Accessor-0.34-136.el6_6.1.x86_64
OB2-INTEG-A.09.00-1.x86_64
OB2-TS-CORE-A.09.00-1.x86_64
OB2-OR8-A.09.00-1.x86_64
oddjob-mkhomedir-0.30-5.el6.x86_64
[root@linclient LOCAL_INSTALL]#

To check HP DP client version run the following command:

[root@linclient omni]# /opt/omni/bin/omnicc -ver
HP Data Protector A.09.00: OMNICC, internal build 87, built on Wed 11 Jun 2014 12:08:18 AM AEST

In case you want to uninstall the client remove the following rpms:

rpm -e OB2-TS-CORE-A.09.00-1.x86_64
rpm -e OB2-DA-A.09.00-1.x86_64
rpm -e OB2-CORE-A.09.00-1.x86_64

Today I ran into a situation wherein I had to install the Oracle integration library on a server with the client already installed.
I had to remove & re-install the client from scratch to do that.

OVM for SPARC Primer part 5

Modifying resource allocations on ldoms:


To get a detailed view of current configuration & resource allocations of the guest domain type:

#ldm list-bindings domainname

To increase CPU/memory resources on an ldom type the following:

bash-3.00# ldm list-domain  domain1
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
domain1          inactive   ------          2     1G
bash-3.00# ldm set-mem 2g  domain1
bash-3.00# ldm set-vcpu 4  domain1
bash-3.00#
bash-3.00# ldm list-domain  domain1
NAME             STATE      FLAGS   CONS    VCPU  MEMORY   UTIL  UPTIME
domain1          inactive   ------          4     2G
bash-3.00#

To check for available resources, go to the primary domain, get the total resources from the prtdiag command & subtract from that the aggregate of resources currently allocated to all the ldoms.

You can modify resource allocations to guest domains on the fly, but in the case of the primary domain you need a reboot.

For troubleshooting purposes you may need to capture a crash dump, which requires a panic to be triggered.
You can panic a guest domain using the following command:

bash-3.00# ldm panic domain1


Some basic troubleshooting checks:


·         Check the status of required services

bash-3.00# svcs -l ldmd
fmri         svc:/ldoms/ldmd:default
name         Logical Domains Manager
enabled      true
state        online
next_state   none
state_time   Wed May 27 19:30:18 2015
logfile      /var/svc/log/ldoms-ldmd:default.log
restarter    svc:/system/svc/restarter:default
contract_id  42
dependency   require_all/none svc:/system/filesystem/local (online)
dependency   require_all/none svc:/network/loopback (online)
bash-3.00#
bash-3.00# svcs -l vntsd
fmri         svc:/ldoms/vntsd:default
name         virtual network terminal server
enabled      true
state        online
next_state   none
state_time   Wed May 27 19:30:21 2015
logfile      /var/svc/log/ldoms-vntsd:default.log
restarter    svc:/system/svc/restarter:default
contract_id  60
dependency   optional_all/error svc:/milestone/network (online)
dependency   optional_all/none svc:/system/system-log (online)
bash-3.00#

·         If there are any issues with services then check their corresponding log files for further analysis.

·         If the ldom manager version is greater than 3.0 you can view individual guest domain logs under /var/log/vntsd.

·         You can run explorer with ldom option to gather ldom related logs.

·         Check for consistency of /var/opt/SUNWldm/ldom-db.xml file.
