Saturday, February 9, 2008

Oracle 10g CRS upgrade to 11g CRS

To upgrade Oracle 10g CRS to 11g CRS, we have two options -
  • Perform a rolling upgrade (this requires that your current CRS version is >= 10.2.0.3, or 10.2.0.2 with the bundle patch). This option allows us to upgrade the CRS without complete downtime.
  • Upgrade the CRS on all the nodes at the same time, with complete downtime.
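Before choosing an option, the current CRS version can be confirmed with crsctl; a quick sketch (the $CRS_HOME path is whatever your existing 10g CRS home is):

```shell
# Active (cluster-wide) CRS version - a rolling upgrade needs at least
# 10.2.0.3, or 10.2.0.2 with the bundle patch
$CRS_HOME/bin/crsctl query crs activeversion

# Software version installed on this particular node
$CRS_HOME/bin/crsctl query crs softwareversion
```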
I will go over the steps for upgrading the CRS to 11g using the first option mentioned above.

On Node 1:

1. Stop the CRS: either run $CRS_HOME/bin/crsctl stop crs as root, or run the following from the staging area as root:

/Staging_Area/11.1.0.6/clusterware/upgrade/preupdate.sh -crshome $CRS_HOME -crsuser oracle
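Before invoking the installer, it is worth confirming that the stack really is down on the local node; a minimal check (run as root) might look like:

```shell
# Stop the CRS stack on the local node only
$CRS_HOME/bin/crsctl stop crs

# Confirm the daemons are no longer running - crsctl check crs should
# report that the stack is down, and no crsd/cssd/evmd processes remain
$CRS_HOME/bin/crsctl check crs
ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep
```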


2. Invoke runInstaller from the staging area:



3. As you can see, the option to specify the Home name and the destination directory is disabled, since the OUI has detected an existing cluster installation.


4. Select only the local node (in my case, alps01). Selecting both nodes would require the CRS to be down on both of them - meaning a complete downtime - and it would no longer be a rolling upgrade:



5. If the CRS is not shut down on the local node, the following error appears:

6. This information would appear in the install logs - pretty detailed and informative :-)

7. Once you shut down the CRS on the local node and retry, it should go through fine (no errors or warnings):


8. On clicking Next, the summary screen appears. Note that we are upgrading the CRS on the local node (alps01) only:

9. Install progress screen:

10. At the end of the upgrade, the Installer prompts us to run the rootupgrade script as root:


11. Output of the rootupgrade script:

12. Check the status of CRS versions:
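At this stage of a rolling upgrade, the software version on the upgraded node differs from the active version of the cluster. A sketch of the check (run from the upgraded node):

```shell
# The freshly upgraded node should now report the 11.1.0.6.0 software
$CRS_HOME/bin/crsctl query crs softwareversion

# The active version remains at the 10.2 level until the last node
# in the cluster has been upgraded
$CRS_HOME/bin/crsctl query crs activeversion
```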


On Node 2:
13. Shut down the CRS on node 2. Note that while performing a rolling upgrade, we have to invoke the OUI from the second node; this can't be done from node 1.
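The shutdown on node 2 is the same command used on node 1, run as root on that node:

```shell
# As root on node 2: stop the CRS stack before invoking the OUI there
$CRS_HOME/bin/crsctl stop crs
```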

14. Invoke the OUI from node 2, select only the local node, and deselect the remote (first) node:

15. Cluster Verification screen....looks good.

17. Prompt to run rootupgrade script after the upgrade on Node 2:

18. Output of rootupgrade script on Node 2:

[root@everest02 upgrade]# /opt/oracle/product/crs/install/rootupgrade
Checking to see if Oracle CRS stack is already up...
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root

Oracle Cluster Registry configuration upgraded successfully
Adding daemons to inittab
Attempting to start Oracle Clusterware stack
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Waiting for Cluster Synchronization Services daemon to start
Cluster Synchronization Services daemon has started
Event Manager daemon has started
Cluster Ready Services daemon has started
Oracle CRS stack is running under init(1M)
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 2: everest02 everest02-priv everest02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
CRS stack on this node, is successfully upgraded to 11.1.0.6.0
Checking the existence of nodeapps on this node
Creating '/opt/oracle/product/crs/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /opt/oracle/product/crs/install/paramfile.crs

[root@everest02 upgrade]#



19. Verify the version of the CRS:
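Once the last node has been upgraded, both the software version and the active version should report the new release; for example (run from either node):

```shell
# Both commands should now report 11.1.0.6.0 on every node,
# confirming that the rolling upgrade of the CRS is complete
$CRS_HOME/bin/crsctl query crs softwareversion
$CRS_HOME/bin/crsctl query crs activeversion
```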