Tuesday 19 April 2016

Performing a live upgrade on a Solaris global zone with local zones running

A live upgrade of a global zone with local zones installed does not work right out of the box and requires some additional patches.

If you try to create a BE without these patches, the lucreate command fails as shown below:

root@global:/# lucreate -n Sol10_jan14 
Analyzing system configuration. 
Comparing source boot environment <10_1009> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment. 
Updating boot environment description database on all BEs. 
Updating system configuration files. 
Creating configuration for boot environment <Sol10_jan14>. 
Source boot environment is <10_1009>. 
Creating boot environment <Sol10_jan14>. 
Cloning file systems from boot environment <10_1009> to create boot environment <Sol10_jan14>. 
Creating snapshot for <rpool/ROOT/10_1009> on <rpool/ROOT/10_1009@Sol10_jan14>. 
Creating clone for <rpool/ROOT/10_1009@Sol10_jan14> on <rpool/ROOT/Sol10_jan14>. 
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/Sol10_jan14>. 
Creating snapshot for <local2os/os> on <local2os/os@Sol10_jan14>. 
Creating clone for <local2os/os@Sol10_jan14> on <local2os/os-Sol10_jan14>. 
cannot mount 'local2os/os-Sol10_jan14': filesystem already mounted 
ERROR: Failed to mount dataset <local2os/os-Sol10_jan14> 
Creating snapshot for <local1os/os> on <local1os/os@Sol10_jan14>. 
Creating clone for <local1os/os@Sol10_jan14> on <local1os/os-Sol10_jan14>. 
cannot mount 'local1os/os-Sol10_jan14': filesystem already mounted 

Details of the local zone pools and their datasets are given below:

root@global:/# df -h | egrep 'local1|local2'
local1os/os          20G  10.0G   8.1G    56%    /zones/local1
local2os/os         236G   6.2G   229G     3%    /zones/local2

root@global:/# zpool list
NAME                   SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
rpool                  136G  89.3G  46.7G  65%  ONLINE  -
local1os                20G  7.81G  12.2G  39%  ONLINE  -
local2os               240G  7.70G   232G   3%  ONLINE  -
root@global:/#
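
For reference, the zones whose zonepaths live on these pools can be listed with zoneadm. The output below is illustrative, reconstructed from the zonepaths shown above, so the IDs, brands and states on your system may differ:

root@global:/# zoneadm list -cv
  ID NAME     STATUS     PATH              BRAND    IP
   0 global   running    /                 native   shared
   1 local1   running    /zones/local1     native   shared
   2 local2   running    /zones/local2     native   shared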

The reason for the failure is that, without the required patches, lucreate cannot correctly snapshot and clone the zone-related datasets.
A live upgrade operation cannot proceed without snapshots of the source BE's datasets.
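
Before retrying, it is also worth checking whether the failed run left any snapshots or clones behind. The commands below use the dataset names from the failed run above; only destroy datasets that zfs list actually shows and that are not in use (clones must be destroyed before their origin snapshots):

root@global:/# zfs list -t snapshot | grep Sol10_jan14
root@global:/# zfs destroy local2os/os-Sol10_jan14
root@global:/# zfs destroy local2os/os@Sol10_jan14
root@global:/# zfs destroy local1os/os-Sol10_jan14
root@global:/# zfs destroy local1os/os@Sol10_jan14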

To fix this issue, install the following mandatory patches:

- 119254-90 Install and Patch Utilities Patch 
- 121430-92 Live Upgrade patch 
- 121428-15 SUNWluzone required patches 
- 138130-01 vold patch 
- 146578-06 cpio patch (146578-06 is obsoleted by the 148027 patch chain) 

The patches can be downloaded from the Oracle Support website and installed on the global zone with the patchadd command, as shown below.
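
For example, assuming the downloaded patch zips have been unpacked into /var/tmp/patches (a staging directory chosen here for illustration; adjust the path and revisions to match what you actually downloaded), the installation could look like the following. 119254-90 updates the patch utilities themselves, so apply it before the others:

root@global:/# cd /var/tmp/patches
root@global:/# unzip 119254-90.zip
root@global:/# patchadd /var/tmp/patches/119254-90
root@global:/# patchadd -M /var/tmp/patches 121430-92 121428-15 138130-01 146578-06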

Once the patches have been installed, lucreate should run without issues and produce output similar to the following:

root@global:/# lucreate -n TEST_BE
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <TEST_BE>.
Source boot environment is <Sol_BE_14>.
Creating file systems on boot environment <TEST_BE>.
Populating file systems on boot environment <TEST_BE>.
Temporarily mounting zones in PBE <Sol_BE_14>.
Analyzing Primary boot environment.
WARNING: Zonepath </zones/local1> of zone <local1> lies on a filesystem shared between BEs, remapping zonepath to </zones/local1-TEST_BE>.
WARNING: Filesystem <local1os/os> is shared between BEs, remapping to <local1os/os-TEST_BE>.
WARNING: Zonepath </zones/local2> of zone <local2> lies on a filesystem shared between BEs, remapping zonepath to </zones/local2-TEST_BE>.
WARNING: Filesystem <local2os/os> is shared between BEs, remapping to <local2os/os-TEST_BE>.
Processing alternate boot environment.
ZFS Datasets for which snapshot and clone will be created for BE <TEST_BE> are:
rpool/ROOT/Sol_BE_14
local1os/os
local2os/os
Duplicating ZFS datasets from PBE to ABE.
Creating snapshot for <rpool/ROOT/Sol_BE_14> on <rpool/ROOT/Sol_BE_14@TEST_BE>.
Creating clone for <rpool/ROOT/Sol_BE_14@TEST_BE> on <rpool/ROOT/TEST_BE>.
Creating snapshot for <local1os/os> on <local1os/os@TEST_BE>.
Creating snapshot for <local2os/os> on <local2os/os@TEST_BE>.
Creating clone for <local2os/os@TEST_BE> on <local2os/os-TEST_BE>.
Mounting ABE <TEST_BE>.
Generating list of files to be copied to ABE.
Finalizing ABE.
Remapping zonepaths in <TEST_BE>.
Zonepath of zone <local1> in BE <TEST_BE> is set to </zones/local1-TEST_BE>.
Zonepath of zone <local2> in BE <TEST_BE> is set to </zones/local2-TEST_BE>.
Unmounting ABE <TEST_BE>.

Fixing properties of ZFS datasets in ABE.
Reverting state of zones in PBE <Sol_BE_14>.
Making boot environment <TEST_BE> bootable.
Population of boot environment <TEST_BE> successful.
Creation of boot environment <TEST_BE> successful.
root@global:/#

You can validate the creation of the new BE by running lustatus as shown below:

root@global:/# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
10_1009                    yes      no     no        yes    -
Sol_BE_14                  yes      yes    yes       no     -
TEST_BE                    yes      no     no        yes    -
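
You can also confirm the ZFS snapshots that lucreate took for the new BE; the snapshot names below correspond to the run shown above, and the listing on your system may include additional snapshots:

root@global:/# zfs list -t snapshot -o name | grep TEST_BE
rpool/ROOT/Sol_BE_14@TEST_BE
local1os/os@TEST_BE
local2os/os@TEST_BE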

To delete the new BE, run the following command:

root@global:/# ludelete TEST_BE
WARNING: Deleting ZFS dataset <rpool/ROOT/TEST_BE>.
WARNING: Deleting ZFS dataset <local1os/os-TEST_BE>.
WARNING: Deleting ZFS dataset <local2os/os-TEST_BE>.
Updating boot environment configuration database.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
