Gopi Desaboyina Solaris Blogs

Just another weblog

Solaris Live Upgrade: updating to the latest Solaris release and patches using the LU method

Live Upgrade is a wonderful feature Solaris has for moving to the latest Solaris release or applying the Recommended patch cluster without much downtime.
OK, let's dive into how to do that. The same procedure works for going from Solaris 9 to Solaris 10, or from Solaris 10 Update x to Update y.

For Live Upgrade you need a free disk. Most sites run mirrored disks, so you can free one of them by breaking the mirror.
In my case I have two mirrored disks, c0t0d0 and c0t1d0. Below are my file systems.

Disk slice    Mounted on       SVM meta volume
c0t0d0s0      / (root)         d30
c0t0d0s1      swap             d31
c0t0d0s2      (entire disk)    –
c0t0d0s3      /var             d33
c0t0d0s4      /opt             d34
c0t0d0s5      /export/home     d35
c0t0d0s6      /var/crash       d36
c0t0d0s7      SVM metadata (metadb)

Below are my meta volumes.
# metastat -p
d36 -m d16 d26 1
d16 1 1 c0t0d0s6
d26 1 1 c0t1d0s6
d34 -m d14 d24 1
d14 1 1 c0t0d0s4
d24 1 1 c0t1d0s4
d33 -m d13 d23 1
d13 1 1 c0t0d0s3
d23 1 1 c0t1d0s3
d31 -m d11 d21 1
d11 1 1 c0t0d0s1
d21 1 1 c0t1d0s1
d30 -m d10 d20 1
d10 1 1 c0t0d0s0
d20 1 1 c0t1d0s0
d35 -m d15 d25 1
d15 1 1 c0t0d0s5
d25 1 1 c0t1d0s5

Currently I'm running Solaris 10 u4. I want to update it to Update 7 and apply the 10_Recommended patch cluster on top.
I'll use c0t1d0 as the alternate boot environment and put Update 7 there first.
Since both disks are currently mirrored, I need to break the mirror using metadetach and metaclear, like below:

# for i in 0 1 3 4 5 6 ; do metadetach d3${i} d2${i} ; metaclear d2${i} ; done

Now our second disk is free. I downloaded the Solaris 10 u7 ISO and the 10_Recommended patch cluster and transferred them to my system.
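Since metadetach and metaclear are destructive, it can help to echo the expanded commands first as a dry run before executing them; a small sketch:

```shell
# Dry run: print the detach/clear command for every mirrored slice
# (0 1 3 4 5 6 match the slice layout shown earlier).
for i in 0 1 3 4 5 6 ; do
  echo "metadetach d3${i} d2${i}"   # detach the c0t1d0 submirror
  echo "metaclear d2${i}"           # delete the detached submirror
done
# When the output looks right, drop the echo to run for real.
```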
Mount the ISO image using the steps below.

# lofiadm -a /opt/CD/sol-10-u7-ga-sparc-dvd.iso
# mount -F hsfs /dev/lofi/1 /mnt
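Once the upgrade is finished and the media is no longer needed, the loopback device can be torn down again; a quick sketch, assuming /dev/lofi/1 as created above:

```shell
# Unmount the ISO and release the loopback device created by lofiadm -a.
umount /mnt
lofiadm -d /dev/lofi/1
```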

Once the media is mounted, I need to upgrade the SUNWlu* packages on the active boot environment (c0t0d0) to make sure Live Upgrade goes through cleanly.
First, remove the existing packages on the active boot environment:

# pkgrm SUNWlur SUNWluu

Note: there will also be a package called SUNWluzone. Don't remove it. It's required for zones; without it you won't be able to install any zone.

After the removal, we need to install the same packages from the media. You can do that using the script provided on the media:

# /mnt/Solaris_10/Tools/Installers/liveupgrade20 -noconsole -nodisplay

which installs the above packages silently. Once that's done, we have all the packages required for Live Upgrade.
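Before moving on to lucreate, it doesn't hurt to confirm the Live Upgrade packages are actually in place; a quick check, using the package names mentioned above:

```shell
# Verify the Live Upgrade packages on the active boot environment.
# SUNWluzone must be present if you run non-global zones.
pkginfo SUNWlur SUNWluu SUNWluzone
```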

The next step is to create the alternate boot environment using lucreate, which I invoked as below.

# lucreate -c "Sol10u4" -C /dev/dsk/c0t0d0s2 \
-m /:/dev/dsk/c0t1d0s0:ufs \
-m /var:/dev/dsk/c0t1d0s3:ufs \
-m /opt:/dev/dsk/c0t1d0s4:ufs \
-m /export/home:/dev/dsk/c0t1d0s5:ufs \
-m /var/crash:/dev/dsk/c0t1d0s6:ufs \
-m /zones:/dev/vx/dsk/c1t2dg/c1t2dgvol02:vxfs \
-n "Solaris10u7"

-c names the current active boot environment. Since I'm running Update 4, I used Sol10u4.
-C specifies the current root disk. Normally lucreate finds it automatically, but with SVM it may not, in which case you can point it at the current active boot disk with this option.
-m specifies where to create the alternate boot environment. I listed all my file systems: /, /var, /opt, /export/home, /var/crash and /zones. /zones is where I have one local zone, created on a VxFS file system. Since that local zone must be updated too, I created a new Veritas volume and used it as the alternate for the zone file system mounted on /zones. If your local zones are on ZFS, this is much easier: just take a ZFS snapshot, and in case of disaster you can roll back to that snapshot.
-n names the alternate boot environment. Since I'm going to put Update 7 on it, I named it Solaris10u7.
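For the ZFS case mentioned above, the snapshot-and-rollback idea can be sketched as follows (the dataset name rpool/zones is hypothetical; substitute your own):

```shell
# Take a point-in-time snapshot of the zone dataset before upgrading.
zfs snapshot rpool/zones@pre-u7
zfs list -t snapshot            # confirm the snapshot exists
# If the upgrade goes badly, roll the zones back:
# zfs rollback rpool/zones@pre-u7
```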

# output from above lucreate.

Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <Sol10u4> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices

Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <Solaris10u7>.
Source boot environment is <Sol10u4>.
Creating boot environment <Solaris10u7>.
Creating file systems on boot environment <Solaris10u7>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c1t0d0s0>.
Creating <ufs> file system for </export/home> in zone <global> on </dev/dsk/c1t0d0s5>.
Creating <ufs> file system for </opt> in zone <global> on </dev/dsk/c1t0d0s4>.
Creating <ufs> file system for </var> in zone <global> on </dev/dsk/c1t0d0s3>.
Creating <ufs> file system for </var/crash> in zone <global> on </dev/dsk/c1t0d0s6>.
Creating <vxfs> file system for </zones> in zone <global> on </dev/vx/dsk/c1t2dg/c1t2dgvol02>.
Mounting file systems for boot environment <Solaris10u7>.
Calculating required sizes of file systems for boot environment <Solaris10u7>.
Populating file systems on boot environment <Solaris10u7>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Populating contents of mount point </export/home>.
Populating contents of mount point </opt>.
Populating contents of mount point </var>.
Populating contents of mount point </var/crash>.
Populating contents of mount point </zones>.
Creating shared file system mount points.
Copying root of zone <oracle.z1> to </.alt.tmp.b-5mc.mnt/zones/oracle.z1>.
WARNING: The file </tmp/lucopy.errors.8961> contains a list of <6>
potential problems (issues) that were encountered while populating boot
environment <Solaris10u7>.
INFORMATION: You must review the issues listed in
</tmp/lucopy.errors.8961> and determine if any must be resolved. In
general, you can ignore warnings about files that were skipped because
they did not exist or could not be opened. You cannot ignore errors such
as directories or files that could not be created, or file systems running
out of disk space. You must manually resolve any such problems before you
activate boot environment <Solaris10u7>.
Creating compare databases for boot environment <Solaris10u7>.
Creating compare database for file system </zones>.
Creating compare database for file system </var/crash>.
Creating compare database for file system </var>.
Creating compare database for file system </opt>.
Creating compare database for file system </export/home>.
Creating compare database for file system </>.
Updating compare databases on boot environment <Solaris10u7>.
Making boot environment <Solaris10u7> bootable.
Setting root slice to </dev/dsk/c1t0d0s0>.
Population of boot environment <Solaris10u7> successful.
Creation of boot environment <Solaris10u7> successful.

After that, make sure you review /tmp/lucopy.errors.8961 as lucreate advised.

lucreate has now created two boot environments: the active one named Sol10u4 and the alternate named Solaris10u7. You can list them using lustatus and lufslist, like below.

# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
————————– ——– —— ——— —— ———-
Sol10u4 yes yes yes no –
Solaris10u7 yes no no yes –

# lufslist -n Solaris10u7
boot environment name: Solaris10u7

Filesystem fstype device size Mounted on Mount Options
———————– ——– ———— ——————- ————–
/dev/md/dsk/d31 swap 68721377280 – –
/dev/dsk/c1t0d0s0 ufs 17182949376 / logging
/dev/dsk/c1t0d0s3 ufs 6450118656 /var logging
/dev/dsk/c1t0d0s5 ufs 1083703296 /export/home logging
/dev/dsk/c1t0d0s4 ufs 27488550912 /opt logging
/dev/dsk/c1t0d0s6 ufs 25779634176 /var/crash logging
/dev/vx/dsk/c1t2dg/c1t2dgvol02 vxfs 73378004992 /zones suid

The next step is to upgrade the OS on the alternate boot environment (Solaris10u7) to Update 7, using the command below.

# luupgrade -u -n Solaris10u7 -s /mnt

-u for upgrade
-s for the location of the Update 7 media.

It produces the following output and may take a while, since we have one local zone as well.
All of this happens without affecting the current active environment. Wonderful, huh?

42126 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <Solaris10u7>.
Determining packages to install or upgrade for BE <Solaris10u7>.
Performing the operating system upgrade of the BE <Solaris10u7>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <Solaris10u7>.
Package information successfully updated on boot environment <Solaris10u7>.
Adding operating system patches to the BE <Solaris10u7>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <Solaris10u7> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <Solaris10u7> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <Sol10u4>. Before you activate boot
environment <Solaris10u7>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
The Solaris upgrade of the boot environment <Solaris10u7> is complete.
Installing failsafe
Failsafe install is complete.

OK, good, that's done. Our alternate boot environment is updated to Update 7. Now we should activate it and boot from there.
Please read the output of luactivate carefully. It says you must use only the init or shutdown commands to reboot.

# luactivate Solaris10u7


The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.


In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

setenv boot-device /pci@0/pci@0/pci@2/scsi@0/disk@1,0:c

3. Boot to the original boot environment by typing:



Modifying boot archive service
Activation of boot environment <Solaris10u7> successful.

# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
————————– ——– —— ——— —— ———-
Sol10u4 yes yes no no –
Solaris10u7 yes no yes no –

Run init 0 and boot from disk1 (check your OBP aliases). It should come up with Solaris 10 u7 and you are good to go. If you have trouble booting from the second disk due to installation issues, you can go back to the original boot environment (Sol10u4, on disk0), then mount the alternate boot environment's /var file system and look for the upgrade log in /var/sadm/system/logs/upgrade_log.
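Instead of mounting the alternate BE's file systems by hand, the lumount/luumount pair can do it for you; a sketch of checking the upgrade log from the running Sol10u4 environment:

```shell
# Mount the inactive BE under /a, inspect its upgrade log, then unmount.
lumount Solaris10u7 /a
more /a/var/sadm/system/logs/upgrade_log
luumount Solaris10u7
```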

If booting from disk1 is successful, you can reboot back to Update 4 (disk0) and apply the patches to Update 7 (disk1) as described below.

In the same way you can install the 10_Recommended patches to the alternate boot environment: the Solaris installcluster script has a -B option for specifying an alternate boot environment. Before jumping in and applying the patches, we need to install the prerequisite patches on the current active boot environment. You can do that using:

# ./installcluster --apply-prereq --s10cluster
Setup …….

Solaris 10 SPARC Recommended Patch Cluster (2009.09.09)

Application of patches started : 2009.09.16 10:38:52

Applying 120900-04 (1 of 8 ) … skipped
Applying 121133-02 (2 of 8 ) … skipped
Applying 119254-70 (3 of 8 ) … success
Applying 119317-01 (4 of 8 ) … skipped
Applying 121296-01 (5 of 8 ) … skipped
Applying 127884-01 (6 of 8 ) … skipped
Applying 140171-04 (7 of 8 ) … success
Applying 139969-02 (8 of 8 ) … success

Application of patches finished : 2009.09.16 10:40:03

Following patches were applied :
119254-70 140171-04 139969-02

Following patches were skipped :
Patches already applied
120900-04 121133-02 119317-01 121296-01 127884-01

Installation of prerequisite patches complete.

Install log files written :

Now install the 10_Recommended patches to the alternate boot environment using the -B option.

# ./installcluster -B Solaris10u7 --s10cluster

Setup ……

Solaris 10 SPARC Recommended Patch Cluster (2009.09.09)

Application of patches started : 2009.09.16 10:40:57

Applying 120900-04 ( 1 of 157 ) … skipped
Applying 121133-02 ( 2 of 157 ) … skipped
Applying 119254-70 ( 3 of 157 ) … success
Applying 119317-01 ( 4 of 157 ) … skipped
Applying 121296-01 ( 5 of 157 ) … skipped
Applying 127884-01 ( 6 of 157 ) … skipped
Applying 140171-04 ( 7 of 157 ) … success
Applying 139969-02 ( 8 of 157 ) … success
Applying 120719-02 ( 9 of 157 ) … skipped
Applying 126868-03 ( 10 of 157 ) … success
…<out put skipped >

Application of patches finished : 2009.09.16 11:07:43

Following patches were applied :
119254-70 140386-04 121104-10 120094-24 119115-35
140171-04 120410-32 140921-02 139604-06 140074-09
139969-02 139608-05 141690-02 140179-03 119783-13
126868-03 124188-03 141930-01 140917-02 139966-05
120272-25 121308-18 138874-05 141414-10 141020-03
125555-05 119313-28 139982-04 125719-22 125332-07
118666-22 118667-22 142286-01 119900-09 141742-04

Following patches were skipped :
Patches already applied
120900-04 125547-02 136998-06 119986-03 136839-01
121133-02 125503-02 123611-04 120543-14 139606-02
119317-01 120011-14 123893-15 126440-01 138181-01
121296-01 127127-11 124444-01 123590-10 122259-02
<o/p skipped>
Patches obsoleted by one or more patches already applied
118731-01 124204-04 122660-10 119090-31
Patches not applicable to packages on the system
138824-04 126363-07 121211-02 138822-04 142138-01
120811-09 140455-01 142290-01 137004-05 122472-07
120412-10 139943-01 126365-14 121181-04 120414-24

Installation of patch set to alternate boot environment complete.

Please remember to activate boot environment Solaris10u7 with luactivate(1M)
before rebooting.

Install log files written :

# luactivate Solaris10u7

and reboot back to disk1. You should now have the latest patches and Update 7. If everything works as expected, you can delete the Sol10u4 boot environment using ludelete and recreate the SVM mirror.
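The cleanup can be sketched roughly as below. Note that the re-mirroring part is site-specific (after booting from bare slices, root has to be put back under SVM before submirrors can be attached), so treat it as a pattern, not a recipe:

```shell
# Delete the old boot environment, freeing the c0t0d0 slices.
ludelete Sol10u4

# General SVM re-mirroring pattern (root slice shown; repeat per slice):
#   metainit -f d10 1 1 c0t1d0s0   # metadevice over the running root, forced
#   metainit d30 -m d10            # one-way mirror
#   metaroot d30                   # update /etc/vfstab and /etc/system, reboot
#   metainit d20 1 1 c0t0d0s0      # submirror on the freed disk
#   metattach d30 d20              # attach; resync starts automatically
```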

That's pretty much it.


September 18, 2009 - Posted by | Solaris |


  1. Hi Gopi,

    Did your vxfs zone also get upgraded to u7?


    Comment by Severino De Alexandris | September 23, 2010 | Reply

    • Yes, Rino. The non-global zones under /zones all got updated to u7.

      Comment by gdesaboyina | September 24, 2010 | Reply

      • Thanks Gopi

        Comment by Severino De Alexandris | September 24, 2010

  2. Hi Gopi,

    I am planning to use live upgrade on
    Solaris 10 3/05 s10_74L2a SPARC
    Assembled 22 January 2005
    to upgrade it to the latest. I have zones which are on vxfs. My global zone is ufs.

    Any suggestion ?


    Comment by Hassan | January 19, 2011 | Reply

    • Hassan, then your setup is the same as mine. How many zones do you have? Are they on the same vxfs file system? Even if they are on different file systems, what you can do is get LUNs the same size as the existing zone LUNs, then specify them in lucreate with the -m option. That way, even if luupgrade doesn't work, you won't mess up the existing LUNs. If the upgrade succeeds, you can decommission the old LUNs. Hope this helps. Please write me how it goes. Good luck!

      Comment by gdesaboyina | January 28, 2011 | Reply

  3. fantastic…God bless you!!!

    Comment by moinudeen | July 1, 2012 | Reply
