Wednesday 20 April 2016

Recovering a zone stuck in the shutting_down state due to a 'network interface' error

Recently I came across an issue where a Solaris 10 branded zone on a Solaris 11 physical server got stuck in the shutting_down state while being halted.

root@global_zone:~# zoneadm list -icv | grep -i down
  11 localzone        shutting_down /zones/localzone               solaris10 excl

I was not able to kill the zone's zsched process or the zpool processes associated with its ZFS datasets.

root@global_zone:~# pgrep -fl -z localzone
43048 zsched
root@global_zone:~#
root@global_zone:~# ptree -z localzone
43048 zsched
root@global_zone:~#
root@global_zone:~# ps -ef | grep -i localzone
    root 25920     0   0   Mar 02 ?         148:59 zpool-localzone_dpool01
    root 20292     0   0   Mar 02 ?         219:32 zpool-localzone_rpool
    root 42110     0   0   Mar 02 ?          15:54 zpool-localzone_dpool02
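
For context, the kill attempts looked roughly like the following (a reconstruction rather than the original session; the PIDs come from the pgrep and ps output above). The zpool-localzone_* entries have a parent PID of 0, i.e. they are kernel processes, so signals have no effect on them.

kill -9 43048                  # zsched for the zone
kill -9 25920 20292 42110      # zpool-localzone_* processes; no effect, as expected for kernel processes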

The zone's ZFS datasets were visible from the global zone, but they were not mounted.

root@global_zone:~# zpool list  | grep -i localzone
localzone_dpool01  49.8G  18.6G  31.2G  37%  1.00x  ONLINE  -
localzone_dpool02  49.8G  33.2G  16.5G  66%  1.00x  ONLINE  -
localzone_rpool     119G  48.2G  70.8G  40%  1.00x  ONLINE  -
root@global_zone:~#
root@global_zone:~# zfs list | grep -i localzone
localzone_dpool01                                  18.6G  30.4G    31K  /localzone_dpool01
localzone_dpool01/u00                              18.5G  30.4G  18.5G  /u00
localzone_dpool02                                  33.2G  15.8G    31K  /localzone_dpool02
localzone_dpool02/dump                             33.2G  15.8G  33.2G  /export/dump
localzone_rpool                                    48.2G  68.9G    34K  /zones/localzone
localzone_rpool/rpool                              48.2G  68.9G   104K  /rpool
localzone_rpool/rpool/ROOT                         31.0G  68.9G    31K  legacy
localzone_rpool/rpool/ROOT/zbe-0                   81.5M  68.9G  11.1G  /
localzone_rpool/rpool/ROOT/zbe-1                    310M  68.9G  20.0G  /
localzone_rpool/rpool/ROOT/zbe-2                   30.6G  68.9G  21.2G  /
localzone_rpool/rpool/export                       15.2G  68.9G  2.12G  /export
localzone_rpool/rpool/export/home                  13.1G  68.9G  13.1G  /export/home
localzone_rpool/rpool/hta                          2.06G  17.9G  2.06G  /hta
root@global_zone:~#
root@global_zone:~# df -h /zones/localzone
Filesystem             Size   Used  Available Capacity  Mounted on
rpool/ROOT/solaris     547G   2.6G       382G     1%    /
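
A quick way to confirm which of these datasets are actually mounted (not part of the original output, but the commands are standard) is to query the mounted property or list the active ZFS mounts:

zfs get -r mounted,mountpoint localzone_rpool
zfs mount | grep localzone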

Whenever I tried to halt the zone, it failed with a network interface related error.

root@global_zone:/tmp# zoneadm -z localzone halt 
zone 'localzone': End any processes using the zone's network interfaces and re-try 
zone 'localzone': unable to destroy zone 
zoneadm: zone 'localzone': call to zoneadmd failed 
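
Since the error message points at the zone's network interfaces, it helps to first list what the zone actually owns. These checks are my own suggestion rather than part of the original session:

zonecfg -z localzone info             # full zone configuration, including its network resources
dladm show-vnic | grep localzone      # VNICs currently instantiated for the zone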

The workaround for this issue is as follows:

Step 1: Manually delete the VNICs assigned to this zone.

root@global_zone:/tmp# dladm show-vnic | grep localzone 
localzone/vnic3 aggr0 0 2:8:20:b7:64:98 random 2611 
localzone/vnic4 aggr1 0 2:8:20:ee:2f:ce random 0 

root@global_zone:/tmp# dladm delete-vnic localzone/vnic4 
root@global_zone:/tmp# dladm delete-vnic localzone/vnic3 
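
Re-running the listing afterwards should come back empty for this zone:

dladm show-vnic | grep localzone      # should return nothing once both VNICs are gone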

Step 2: Manually mount the ZFS datasets associated with the zone

zfs set mountpoint=/zones/localzone/root/export localzone_rpool/rpool/export
zfs set mountpoint=/zones/localzone/root/export/home localzone_rpool/rpool/export/home
zfs set mountpoint=/zones/localzone/root/hta localzone_rpool/rpool/hta
zfs set mountpoint=/zones/localzone/root/rpool localzone_rpool/rpool

After this we were able to successfully halt and boot the zone.
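
If any dataset still shows up as unmounted after the mountpoint change, an explicit zfs mount may be needed as well. A quick check (commands assumed, not taken from the original session):

zfs mount | grep localzone                  # list the zone's datasets that are actually mounted
zfs mount localzone_rpool/rpool/export      # only needed if a dataset is still unmounted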

Step 3: Detach the zone

zoneadm -z localzone detach -F

Step 4: Export the zone's ZFS pools

zpool export localzone_dpool02
zpool export localzone_dpool01
zpool export localzone_rpool
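
A pool that is still busy will refuse to export, so it is worth confirming the exports went through (a check of my own, not from the original post):

zpool list | grep -i localzone      # should return nothing once all three pools are exported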

Step 5: Attach and boot the zone

zoneadm -z localzone attach
zoneadm -z localzone boot
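
Once the zone is back up, a quick health check along these lines confirms the recovery (the zlogin commands are my own suggestion, not part of the original procedure):

zoneadm list -cv | grep localzone      # state should now show running
zlogin localzone svcs -xv              # no faulted SMF services expected inside the zone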
